bandwidth test version - iguana for unix

jl777

Active Member
Feb 26, 2016
279
345
I was bothered by losing half the performance, so I experimented until I found and fixed the cause.
I had to throw out last week's code dealing with scripts and come up with a new way to stream the data.

The performance is still a bit choppy, but I am seeing bursts above 100MB/sec, which is double what it was before I streamlined things and close to what it used to be without the scripts data. Now I put all the vin scripts in the vins directory and the vout scripts in the vouts directory. I also went to a brute-force one-file-per-block layout within a bundle directory in tmp; that is probably just for debugging, but it is quite handy to have the first-pass data in separate files.

For now, it would help if people could run bandwidth tests using bmon. The big question is what percentage of the available bandwidth iguana uses. Right now it goes as fast as it can; we can throttle it down later if that becomes an issue.

Theoretically, the data is perfect and can be used for wallet operations, but of course I will need to fully validate the bundle files by brute-force comparing them to a known-good reference. So for now, just an indication of what percentage of bandwidth iguana uses will be helpful.

For unix:

Code:
git clone https://github.com/jl777/SuperNET
cd SuperNET
./m_onetime m_unix
cd iguana
./m_unix

Now iguana is built in the SuperNET/agents directory, but the directory structure expects you to run iguana from the SuperNET/iguana directory.

../agents/iguana "{\"agent\":\"iguana\",\"method\":\"addcoin\",\"newcoin\":\"BTC\",\"active\":1,\"services\":129,\"maxpeers\":128,\"numhelpers\":4,\"poll\":1}"
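
The hand-escaped JSON above is easy to mistype. As a sketch only (same addcoin parameters as the command above), you can build the argument in a small script and let `shlex.quote` produce a single shell-safe token:

```python
import json, shlex

# the same request as above, built as a dict instead of hand-escaped JSON
params = {"agent": "iguana", "method": "addcoin", "newcoin": "BTC",
          "active": 1, "services": 129, "maxpeers": 128,
          "numhelpers": 4, "poll": 1}
arg = json.dumps(params)

# shlex.quote wraps the JSON so it survives the shell as one argv entry
print("../agents/iguana " + shlex.quote(arg))
```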

It will print massive amounts of debug output to the console. You can nohup it if that bothers you.

The status printout looks like:

1st.7 N[202] Q.129 h.378293 r.209401 c.0:160000:0 s.302128 d.81 E.80:128139 M.56000 L.402270 est.8 1.97GB 0:16:03 158.573 peers.35/256 Q.(0 0)

1st.7 means bundle 7 is the first one that is not complete
s.302128 means 302128 blocks saved
d.81 means raw files are done downloading for 81 bundles
E.80 means 80 bundle files are completed
M.56000 is how far the mainchain has been validated (via headers)
L.402270 is the longest chain seen by peers
0:16:03 means 16 minutes 3 seconds elapsed
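
To make the fields easier to eyeball, here is a small parsing sketch. The format is inferred from the sample line above, the helper name is made up, and fields not listed are ignored:

```python
import re

def parse_status(line):
    """Pull the documented fields out of an iguana status line.
    The format is inferred from the sample above; other fields are ignored."""
    patterns = {
        "first_incomplete": r"1st\.(\d+)",   # first bundle not yet complete
        "saved":            r"\bs\.(\d+)",   # blocks saved
        "done":             r"\bd\.(\d+)",   # bundles with raw files downloaded
        "mainchain":        r"\bM\.(\d+)",   # mainchain validated via headers
        "longest":          r"\bL\.(\d+)",   # longest chain seen by peers
    }
    out = {k: int(re.search(p, line).group(1)) for k, p in patterns.items()}
    h, m, s = re.search(r"\s(\d+):(\d{2}):(\d{2})\s", line).groups()
    out["elapsed_seconds"] = int(h) * 3600 + int(m) * 60 + int(s)
    return out

line = ("1st.7 N[202] Q.129 h.378293 r.209401 c.0:160000:0 s.302128 d.81 "
        "E.80:128139 M.56000 L.402270 est.8 1.97GB 0:16:03 158.573 "
        "peers.35/256 Q.(0 0)")
fields = parse_status(line)
assert fields["saved"] == 302128 and fields["elapsed_seconds"] == 963
```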

I tend to make many test versions, and during this period things are likely broken, so if you see many recent versions in git, it is safer to go back to one that didn't change for a while.

James

P.S. http://docs.supernet.org has API bindings for sending requests over localhost. After the first build, just do another ./m_unix to build the latest version.
 

jl777

I am seeing the bandwidth range from 1 MB/sec to 100 MB/sec, a 100x variance, which is not normal.

The uncompressed file sizes are 35GB for the "DB", 7.5GB for the vout scripts, and 33GB for the vins (mostly sigs and pubkeys). After validation the vins won't be needed, so there is no sense compressing them. The bundle files are well worth the time: 2:1 compression gains 17GB. The vouts file didn't compress so well due to its high-entropy content, but it is still 2.5GB smaller, so that comes in right around the estimated 20GB footprint for a fully validating node.

The bandwidth variance could just be HDD contention, but without an SSD to test with I can't verify this.
 

jl777

I think I fixed the variance issue:

1st.2 N[202] Q.77 h.164187 r.30260 c.0:30000:0 s.69563 d.15 E.15:66968 M.14000 L.402280 est.8 851.007kb 0:01:08 647.630 peers.101/256 Q.(0 0)

I am seeing it stay above 1gbps for a few seconds, but mostly it is in the 70MB/sec range.

1st.7 N[202] Q.106 h.222140 r.70167 c.0:70000:0 s.167839 d.36 E.35:166524 M.14000 L.402280 est.8 935.290kb 0:04:08 138.473 peers.103/256 Q.(0 0)

ramchaindata have 127:696 at 0 | 52 blocks 3945.898kb redundant xfers total 14.82GB 0.03% wasted

1st.7 N[202] Q.121 h.266391 r.142131 c.0:142000:0 s.222612 d.73 E.71:220074 M.14000 L.402281 est.8 3087.705kb 0:08:03 28.531 peers.107/256 Q.(0 0)

Now it is hitting some dropouts and only getting 10MB/sec. This is why I want to bittorrent the bundle files: we could get them delivered in minimum time and get much more predictable CPU usage for scheduling the validation tasks.

1st.7 N[202] Q.129 h.286377 r.166120 c.0:166000:0 s.252198 d.92 E.83:250069 M.14000 L.402281 est.8 4506.924kb 0:12:09 894.352 peers.110/256 Q.(0 0)

1st.7 N[202] Q.154 h.374305 r.192119 c.0:192000:0 s.280454 d.105 E.96:281791 M.65998 L.402281 est.8 6303.769kb 0:16:56 808.115 peers.114/256 Q.(0 0)

Not bad, but I saw a bit better with a different set of parameters.

1st.0 N[202] Q.127 h.264407 r.74177 c.0:74000:0 s.179542 d.37 E.37:181147 M.0 L.402281 est.8 76.596kb 0:03:59 661.598 peers.115/256 Q.(0 0)

1st.0 N[202] Q.133 h.274399 r.116156 c.0:116000:0 s.243595 d.64 E.58:252848 M.56000 L.402281 est.8 286.070kb 0:08:03 336.365 peers.116/256 Q.(0 0)

Then it hit a rough patch and slowed down:

1st.0 N[202] Q.128 h.278398 r.146141 c.0:146000:0 s.255271 d.80 E.73:259433 M.64000 L.402282 est.8 2052.205kb 0:12:09 96.198 peers.119/256 Q.(0 0)
 

grewalsatinder

New Member
Mar 3, 2016
10
7
Hi James,

I ran a latest compile of iguana on ARM machine (Odroid-XU4):

Here's the output:

Code:
1st.0 N[202] Q.23 h.44206 r.2311 c.0:2000:0 s.16807 d.1 E.1:22501 M.14000 L.402343 est.8 0.211kb 0:01:19 578.874 peers.58/128 Q.(0 7594)
129.13.252.36 recv error on hdr errno.104 (Connection reset by peer) -> zombify
1st.0 N[202] Q.22 h.44206 r.2336 c.0:2000:0 s.20385 d.1 E.1:27473 M.14000 L.402343 est.8 0.211kb 0:01:30 629.770 peers.58/128 Q.(0 5919)
1st.0 N[202] Q.23 h.56197 r.2350 c.0:2000:0 s.23535 d.1 E.1:33296 M.16000 L.402343 est.8 0.211kb 0:01:41 784.205 peers.58/128 Q.(0 5832)
ramchaindata have 0:1722 at 0 | 679 blocks 153.521kb redundant xfers total 17.3MB 0.87% wasted
1st.0 N[202] Q.24 h.56198 r.2368 c.0:2000:0 s.28293 d.1 E.1:38817 M.16000 L.402343 est.8 0.211kb 0:01:53 45.770 peers.58/128 Q.(0 1808)
multisig.1 of 2: invalid pubkey[20] len -1
multisig.1 of 2: invalid pubkey[16] len -1
1st.0 N[202] Q.27 h.56198 r.2394 c.0:2000:0 s.33964 d.1 E.1:41331 M.16000 L.402343 est.8 0.211kb 0:02:04 125.099 peers.58/128 Q.(0 0)
1st.0 N[202] Q.26 h.56198 r.2418 c.0:2000:0 s.38718 d.1 E.1:46750 M.16000 L.402343 est.8 0.211kb 0:02:15 428.761 peers.58/128 Q.(0 1429)
ramchaindata have 32:667 at 0 | 1188 blocks 425.456kb redundant xfers total 24.2MB 1.72% wasted
>>>>>>>>>>>>>>>>>>>>>>> EMIT bundle.8000
iguana_bundlesaveHT.8000 -> total (2010 2010 2120) scriptspace.2000 (pubkeys.0 sigs.0) dest->txids (nil)
init T.2011 U.2011 S.2121 P.2011 X.2120 -> 691419
8000 SAVE: Koffset.689419 scriptoffset.1 stacksize.0 allocsize.691419 gap.-1999 RO.0
3f494f2e 314c7ce4 77eb2102 eb53cf7c 99cf160b 43690ceb 93323159 9c0b346e 6444379a 6e58665a f86c5f040992efdf ht.8000
d.0 ht.8000 607.355kb saved lag.0 elapsed.0
1st.0 N[202] Q.24 h.56198 r.4440 c.0:4000:0 s.41020 d.2 E.2:51502 M.16000 L.402343 est.8 0.421kb 0:02:26 458.951 peers.58/128 Q.(0 3045)
multisig.1 of 2: invalid pubkey[20] len -1
1st.0 N[202] Q.28 h.58197 r.4467 c.0:4000:0 s.44274 d.2 E.2:54544 M.16000 L.402343 est.8 0.421kb 0:02:38 675.661 peers.58/128 Q.(0 0)
>>>>>>>>>>>>>>>>>>>>>>> EMIT bundle.12000
error mapping.(tmp/BTC/12000/0000000064a39b87396c12eeaa33fc81df4ad27c8e10d5e23e2260d7372752c4_6.251) bundlei.251
iguana_bundlesaveHT: no bundlefiles error
>>>>>>>>>>>>>>>>>>>>>>> EMIT bundle.42000
iguana_bundlesaveHT.42000 -> total (2042 2045 2969) scriptspace.2000 (pubkeys.0 sigs.0) dest->txids (nil)
init T.2043 U.2046 S.2970 P.2046 X.2969 -> 740424
42000 SAVE: Koffset.738424 scriptoffset.1 stacksize.0 allocsize.740424 gap.-1999 RO.0
4152f7e5 11f7c750 31121ac0 ee30da2e 5ecfeb6b 3acadd37 0d163a91 9c0b346e a43115fb 2ea500c8 ffb495fbb963b3f7 ht.42000
d.0 ht.42000 648.688kb saved lag.0 elapsed.0
ramchaindata have 33:1388 at 0 | 1569 blocks 1161.248kb redundant xfers total 36.7MB 3.09% wasted
1st.0 N[202] Q.24 h.58197 r.6489 c.0:6000:0 s.44615 d.3 E.3:56725 M.16000 L.402343 est.8 0.631kb 0:02:49 641.049 peers.58/128 Q.(0 0)
>>>>>>>>>>>>>>>>>>>>>>> EMIT bundle.0
1st.0 N[202] Q.25 h.58197 r.6513 c.0:6000:0 s.48318 d.3 E.3:56948 M.16000 L.402343 est.8 0.631kb 0:03:00 171.677 peers.58/128 Q.(0 0)
iguana_bundlesaveHT.0 -> total (2030 2041 2111) scriptspace.2000 (pubkeys.0 sigs.0) dest->txids (nil)
init T.2031 U.2042 S.2112 P.2042 X.2111 -> 694696
0 SAVE: Koffset.692696 scriptoffset.1 stacksize.0 allocsize.694696 gap.-1999 RO.0
481a6ed6 2bf8a382 151d22ed 4e9508b7 0bc16ba3 f505272f 42c4b0e3 9c0b346e f9920338 44ca3a7d fd51b9d32d9b12f1 ht.0
d.0 ht.0 609.829kb saved lag.0 elapsed.0
>>>>>>>>>>>>>>>>>>>>>>> EMIT bundle.2000
iguana_bundlesaveHT.2000 -> total (2028 2043 2136) scriptspace.2000 (pubkeys.0 sigs.0) dest->txids (nil)
init T.2029 U.2044 S.2137 P.2044 X.2136 -> 696017
2000 SAVE: Koffset.694017 scriptoffset.1 stacksize.0 allocsize.696017 gap.-1999 RO.0
f6014817 68d44409 b72ec092 1b787228 1b56c448 1adbf688 266dfab1 9c0b346e 6c6ded82 67093610 7144f1a406a45d6d ht.2000
d.0 ht.2000 611.369kb saved lag.0 elapsed.1
1st.2 N[202] Q.24 h.58197 r.10376 c.0:10000:0 s.47300 d.5 E.5:57276 M.16000 L.402343 est.8 33.075kb 0:03:11 622.154 peers.58/128 Q.(0 0)
HWM allocated 860423952 820.6MB
GENERATE UTXO, verify sigs, etc for ht.8000
>>>>>>>>>>>>>>>>>>>>>>> EMIT bundle.68000
>>>>>>>>>>>>>>>>>>>>>>> EMIT bundle.38000
iguana_bundlesaveHT.68000 -> total (4857 6622 6870) scriptspace.2000 (pubkeys.0 sigs.0) dest->txids (nil)
init T.4858 U.6623 S.6871 P.6623 X.6870 -> 1492779
68000 SAVE: Koffset.1490779 scriptoffset.1 stacksize.0 allocsize.1492779 gap.-1999 RO.0
7323ff0b 5ad3c5fa 7f63c539 ea4e344b bc018369 381dacfb d752dc5b 9c0b346e ab84e2de 328da706 ab379d74918f0700 ht.68000
d.0 ht.68000 1249.387kb saved lag.0 elapsed.1
iguana_bundlesaveHT.38000 -> total (2056 2065 3608) scriptspace.2000 (pubkeys.0 sigs.0) dest->txids (nil)
init T.2057 U.2066 S.3609 P.2066 X.3608 -> 777171
38000 SAVE: Koffset.775171 scriptoffset.1 stacksize.0 allocsize.777171 gap.-1999 RO.0
cc4c458f c51b6253 024126e3 1b00dfd7 83ca3855 39ff94f3 2a15793b 9c0b346e a8e67a44 1616aec8 e48a6ca05ed2dff2 ht.38000
d.0 ht.38000 681.730kb saved lag.0 elapsed.1
1st.2 N[202] Q.24 h.58197 r.14390 c.0:14000:0 s.51309 d.7 E.7:59828 M.16000 L.402343 est.8 35.452kb 0:03:22 91.280 peers.58/128 Q.(0 0)
ramchaindata have 37:1679 at 0 | 1675 blocks 1756.035kb redundant xfers total 49.5MB 3.47% wasted
>>>>>>>>>>>>>>>>>>>>>>> EMIT bundle.10000
iguana_bundlesaveHT.10000 -> total (2012 2015 2144) scriptspace.2000 (pubkeys.0 sigs.0) dest->txids (nil)
init T.2013 U.2016 S.2145 P.2016 X.2144 -> 693188
10000 SAVE: Koffset.691188 scriptoffset.1 stacksize.0 allocsize.693188 gap.-1999 RO.0
34f1eca0 76a3238d 6330d5df fc2f6765 543060f1 308a4494 8b6f81c4 9c0b346e b6615cb3 f133fbda 4dc9516292d72931 ht.10000
d.0 ht.10000 610.161kb saved lag.0 elapsed.0
Full output is here: http://pastebin.com/Ggd5Kx0T
And this is from a MacBook Pro:

Code:
iguana_bundlesaveHT: no bundlefiles error
ALLHASHES FOUND! 29500 allhashes.57
ERROR BINDING PORT.8333. will exit. wait up to a minute and try again. dont worry, this is normal
1st.0 N[1779] Q.57 h.31401 r.1243 c.0:0:0 s.13971 d.0 E.0:27486 M.0 L.1000235 est.8 0.000kb 0:06:16 54.998 peers.32/64 Q.(0 10409)
bind(0.0.0.0) port.8333 try again: Operation timed out sock.46. errno.60
nServices.1 nonce.3613016923322617345 relay node.(100.3.29.33:14631) supernet.0
ERROR BINDING PORT.8333. will exit. wait up to a minute and try again. dont worry, this is normal
ramchaindata have 17:233 at 0 | 3600 blocks 3731.176kb redundant xfers total 63.4MB 5.74% wasted
1st.6 N[200] Q.28 h.74184 r.24054 c.0:24000:0 s.41976 d.12 E.12:91930 M.12000 L.402343 est.8 397.053kb 0:06:28 278.129 peers.66/128 Q.(0 9361)
bind(0.0.0.0) port.8333 try again: Operation timed out sock.21. errno.60
new peers.txt 322 vs (confs/BTCD_peers.txt) 304
ERROR BINDING PORT.8333. will exit. wait up to a minute and try again. dont worry, this is normal
bind(0.0.0.0) port.8333 try again: Operation timed out sock.21. errno.60
1st.6 N[200] Q.28 h.76182 r.24054 c.0:24000:0 s.43003 d.12 E.12:94650 M.12000 L.402343 est.8 397.053kb 0:06:39 264.901 peers.66/128 Q.(0 8571)
ERROR BINDING PORT.8333. will exit. wait up to a minute and try again. dont worry, this is normal
>>>>>>>>>>>>>>>>>>>>>>> EMIT bundle.22000
bind(0.0.0.0) port.8333 try again: Operation timed out sock.21. errno.60
iguana_bundlesaveHT.22000 -> total (2011 2012 2262) scriptspace.2000 (pubkeys.0 sigs.0) dest->txids 0x0
init T.2012 U.2013 S.2263 P.2013 X.2262 -> 699024
22000 SAVE: Koffset.697024 scriptoffset.1 stacksize.0 allocsize.699024 gap.-1999 RO.0
30c33490 521a7ece 83c2b7b2 554f557d b1c52ff7 683f6716 adfa69c5 9c0b346e 02a77b52 88f3d74d 1b8eb21acb8b10e3 ht.22000
d.0 ht.22000 611.360kb saved lag.2 elapsed.0
1st.6 N[200] Q.28 h.76182 r.26053 c.0:26000:0 s.42999 d.13 E.13:97632 M.12000 L.402343 est.8 397.264kb 0:06:50 311.371 peers.66/128 Q.(0 8283)
ERROR BINDING PORT.8333. will exit. wait up to a minute and try again. dont worry, this is normal
bind(0.0.0.0) port.8333 try again: Operation timed out sock.21. errno.60
ramchaindata have 17:254 at 0 | 3725 blocks 3775.693kb redundant xfers total 64.7MB 5.70% wasted
1st.6 N[200] Q.28 h.76182 r.26053 c.0:26000:0 s.42642 d.13 E.13:100669 M.12000 L.402343 est.8 397.264kb 0:07:01 353.767 peers.66/128 Q.(0 7771)
ERROR BINDING PORT.8333. will exit. wait up to a minute and try again. dont worry, this is normal
bind(0.0.0.0) port.8333 try again: Operation timed out sock.21. errno.60
ERROR BINDING PORT.8333. will exit. wait up to a minute and try again. dont worry, this is normal
1st.0 N[1779] Q.57 h.31548 r.1390 c.0:0:0 s.14702 d.0 E.0:28422 M.0 L.1000237 est.8 0.000kb 0:07:11 252.179 peers.32/64 Q.(0 9976)
bind(0.0.0.0) port.8333 try again: Operation timed out sock.21. errno.60
ALLHASHES FOUND! 34000 allhashes.58
ERROR BINDING PORT.8333. will exit. wait up to a minute and try again. dont worry, this is normal
ALLHASHES FOUND! 34500 allhashes.59
bind(0.0.0.0) port.8333 try again: Operation timed out sock.21. errno.60
1st.6 N[200] Q.29 h.78181 r.26053 c.0:26000:0 s.44750 d.13 E.13:105760 M.12000 L.402343 est.8 397.264kb 0:07:23 118.090 peers.66/128 Q.(0 9318)
ERROR BINDING PORT.8333. will exit. wait up to a minute and try again. dont worry, this is normal
ramchaindata have 19:13 at 0 | 3897 blocks 3901.036kb redundant xfers total 66.1MB 5.77% wasted
ALLHASHES FOUND! 32000 allhashes.60
bind(0.0.0.0) port.8333 try again: Operation timed out sock.21. errno.60
1st.0 N[1779] Q.60 h.33099 r.1447 c.0:0:0 s.14938 d.0 E.0:29010 M.0 L.1000237 est.8 0.000kb 0:07:33 111.613 peers.32/64 Q.(0 11090)
ERROR BINDING PORT.8333. will exit. wait up to a minute and try again. dont worry, this is normal
 

jl777

Made some improvements:
1st.36 N[202] Q.79 h.270495 r.136081 c.0:136000:0 s.197237 d.71 E.68:198018 M.65999 L.402386 est.128 567.074kb 0:02:06 805.741 peers.125/256 Q.(0 26063)

1st.50 N[202] Q.91 h.398391 r.248057 c.0:248000:0 s.339130 d.136 E.124:340090 M.100000 L.402387 est.128 5995.041kb 0:11:00 20.810 peers.126/256 Q.(0 8330)

1st.153 N[202] Q.74 h.402387 r.258054 c.0:258000:0 s.383721 d.165 E.129:384999 M.248000 L.402387 est.128 6523.452kb 0:16:39 90.065 peers.126/256 Q.(0 17474)

ramchaindata have 129:1888 at 0 | 90 blocks 9405.356kb redundant xfers total 50.83GB 0.02% wasted

1st.190 N[202] Q.70 h.402387 r.260056 c.0:260000:0 s.401898 d.190 E.130:402649 M.393999 L.402390 est.128 6523.452kb 0:20:49 469.377 peers.127/256 Q.(0 0)

Much, much improved throughput. Now I can add the utxo dataset creation and also start validating the rest of the dataset. Something is still wrong though, as CPU usage is only around 200% and it is lagging on emitting the bundles, but it is not a bad improvement.

James
 

jl777

Got distracted by a new method to get better throughput...

1st.9 N[202] Q.191 h.402417 r.18000 c.0:18000:0 s.268096 d.10 E.9:277192 M.25999 L.402416 est.138 0.000kb 0:03:03 96.934 peers.128/256 Q.(0 0)

1st.23 N[202] Q.175 h.402417 r.52000 c.0:52000:0 s.299562 d.26 E.26:309699 M.71999 L.402416 est.154 0.000kb 0:06:04 635.866 peers.128/256 Q.(0 65)

1st.43 N[202] Q.155 h.402418 r.92000 c.0:92000:0 s.339628 d.46 E.46:349780 M.135998 L.402416 est.175 0.000kb 0:10:17 99.456 peers.128/256 Q.(0 0)

1st.65 N[202] Q.132 h.402418 r.126000 c.0:126000:0 s.382915 d.69 E.63:393679 M.219998 L.402416 est.198 0.000kb 0:15:02 454.109 peers.128/256 Q.(0 0)

1st.78 N[202] Q.118 h.402419 r.132000 c.0:132000:0 s.399491 d.83 E.66:409749 M.269999 L.402416 est.211 0.000kb 0:17:59 752.416 peers.128/256 Q.(0 0)

400K blocks in 18 minutes! But the processing is still lagging, with only half the CPU used; bandwidth, however, sustained around 1gbps for quite a while. At least the download and the first three passes of processing are no longer the bottleneck.
 

jl777

loop iters 10000, starting 32, += 4

1st.143 N[202] Q.58 h.402424 r.208000 c.0:208000:0 s.398191 d.143 E.104:409533 M.324000 L.402423 est.604 0.000kb 0:18:01 591.806 peers.128/256 Q.(0 0)

Same time, but a lot more processed; especially the 143 consecutive bundles is good. Getting a bit too tired to figure out the 143 -> 104 lag; I think I need to set up a new processing pipeline.

Actually, I think the active dataset exceeded the system RAM, which forced flushing of the disk cache, making each bundle take longer to process.
 

jl777

1st.171 N[202] Q.25 h.402437 r.260000 c.0:260000:0 s.397487 d.171 E.130:411587 M.402436 L.402437 est.32 0.000kb 0:26:25 923.886 peers.128/256 Q.(0 2364)
 

jl777

Not sure if it was a lucky test result, but I added independent queues for processing incoming data vs. creating bundles. I also made a verification queue and a couple of state transitions in the implicit state machine to generate the read-only utxo vector and the serialized account balances. The latter is an ever-growing file with changes throughout, so I think it makes sense to just keep the latest few saved.

1st.159 N[202] Q.-531 h.402523 r.318001 c.0:318000:0 s.402077 d.167 E.159:468185 M.402522 L.402523 est.128 33.6MB 0:18:39 528.341 peers.127/1024 Q.(0 0)

ramchaindata have 201:523 at 0 | 15323 blocks 2.64GB redundant xfers total 60.95GB 4.33% wasted

I issue redundant requests to help complete the current bundle; not sure it is worth the tradeoff at 4% wasted bandwidth. Optimizing the throughput is a never-ending process, as it is always possible to make it that much better, but I am pretty happy with the results now and I need to complete the utxo dataset.

CPU usage seemed to average less than 4 cores, so unless the sig validation is super slow, it might fit into the available CPU idle time. The current approach appears to complete the next bundle reasonably quickly relative to the latest bundle, so my guess is that sig validation will be the bottleneck.

I added support for putting all the tmp files on a separate volume via a JSON parameter. This makes it possible to create a ramdisk, put all the tmp files there, and get rid of the slowdown from the HDD thrashing about. I see 8 pending bundle-creation tasks but only 200% CPU, which strongly indicates HDD seeks are what keep it from 800% CPU. So instead of taking 10 minutes to finish the bundle creations, with SSD or ramdisk it would be much faster, probably ~2 minutes.

1st.201 N[202] Q.-531 h.402527 r.390001 c.0:390000:0 s.402258 d.201 E.195:468387 M.402526 L.402527 est.128 33.6MB 0:30:34 31.041 peers.127/1024 Q.(0 0)

After 30 minutes 34 seconds, all blocks are downloaded and the mainchain is verified to 402526, just lagging half a dozen bundles to be emitted. Definitely HDD limited: I just saw one bundle take 3 minutes, which is at least 3x longer than it would take without HDD seeks.

1st.201 N[202] Q.-1238 h.402527 r.402001 c.0:402000:0 s.402258 d.201 E.201:468440 M.402526 L.402527 est.128 33.6MB 0:32:13 1.248 peers.128/1024 Q.(0 0)

So 32 minutes 13 seconds for a complete sync (with just the utxo and sig validation left to weave into the process). A server with SSD or ramdisk should complete it in about 22 minutes, but that assumes a 1gbps connection and 8 decent CPU cores.
 

jl777

If minpend is above maxpend, then each time a bundle is emitted, maxpend is incremented; if below, maxpend is decremented. maxpend determines the width of the bundles that will be actively synced, relative to 1st.

I really should change to a total-GB basis, but this was the easiest way to get it in there quickly. Since the first 100 bundles are very small, we can probably start with a max of 128 and put the min down to 32.

Then you will get a quick burst, but it will be very slow until 1st gets to 96. For a low-memory system, though, maybe this is required. When I get optimal values for various system configs, I can make it use them automatically.
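
The adjustment rule above is a simple feedback loop; here is a sketch (the function name is made up, and the floor/ceiling bounds are illustrative, loosely taken from the 32/128 values mentioned):

```python
def adjust_maxpend(minpend, maxpend, floor=32, ceiling=128):
    """One step of the feedback rule, applied each time a bundle is emitted:
    widen the active sync window while the pending backlog exceeds it,
    narrow it when the backlog falls below it. Bounds are illustrative."""
    if minpend > maxpend:
        maxpend += 1
    elif minpend < maxpend:
        maxpend -= 1
    return max(floor, min(ceiling, maxpend))

# backlog above the window -> window widens by one bundle per emit
assert adjust_maxpend(100, 64) == 65
# backlog below the window -> window narrows, but never below the floor
assert adjust_maxpend(10, 32) == 32
```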
 

jl777

1st.128 N[202] Q.-6117 h.402573 r.262001 c.0:262000:0 s.401056 d.139 E.131:412696 M.402572 L.402573 est.64 81.2MB 0:32:06 207.277 peers.127/1024 Q.(0 0)

It is a bit slower, but I reduced the size of the vouts data from 7.5GB to 120MB; I am not sure it still has all the data though, as that seems like quite a big reduction.

Fine-tuned the dataflow a bit more.

I also got the spends fully validated with 6-byte coordinates, but the first try waited for the previous bundle to be completed. The problem is that this serializes things, and the time to complete is almost an hour! I changed it so it does just the spend vectors as soon as all the needed data is there, and now it achieves full CPU saturation.

I struggled a while to figure out the best format for the read-only spend info, until I realized that the majority of spends are fully resolved within the bundle, due to the locality of vins spending recent vouts. It turns out 80% need no external reference, so the total spend set is 350MB. And this is read-only data, as it never changes.
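
For illustration, a 6-byte spend coordinate (mentioned a couple of paragraphs up) could be packed as a 16-bit bundle index plus a 32-bit index within the bundle. This layout is an assumption for the sketch, not iguana's actual format:

```python
import struct

def pack_spend(bundle, ind):
    """Pack a (bundle index, index-within-bundle) pair into 6 bytes:
    little-endian 16-bit bundle number + 32-bit index. Illustrative only."""
    assert 0 <= bundle < 1 << 16 and 0 <= ind < 1 << 32
    return struct.pack("<HI", bundle, ind)

def unpack_spend(blob):
    """Inverse of pack_spend."""
    return struct.unpack("<HI", blob)

coord = pack_spend(127, 696)
assert len(coord) == 6
assert unpack_spend(coord) == (127, 696)
```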

With these vectors I can create utxo bitmaps synchronized at bundle boundaries, and while doing the vector calculations I can also update total spends by address, but that uses a bit too much memory, so I need to optimize it first.

Quite happy that 100% of vins found their txid via the bitwise hash table, so that bodes well. The real test is to compute total account balances and make sure they add up to the total mining output. We will get that result sometime this week.
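
That balance test is a conservation check: summed balances cannot exceed total coins mined. A sketch of the mined-coins side, using the known Bitcoin subsidy schedule (`total_subsidy` is a hypothetical helper; fees and unclaimed subsidy are ignored here, though the real check has to account for them):

```python
def total_subsidy(height, initial=50 * 10**8, halving=210000):
    """Sum of block subsidies for blocks 0..height-1, in satoshis.
    The reward starts at `initial` and halves every `halving` blocks."""
    total, reward, remaining = 0, initial, height
    while remaining > 0 and reward > 0:
        n = min(remaining, halving)   # blocks paid at the current reward
        total += n * reward
        reward //= 2
        remaining -= n
    return total

# first halving boundary: 210000 blocks at 50 BTC
assert total_subsidy(210000) == 210000 * 50 * 10**8
```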
 

jl777

5 builds later...

It was a bit tricky to find the right condition to allow safe resolving of the vin txids. Initially they are just stored as the full txid (yikes! no compression), but as soon as all the previous bundles are there, all vins should be resolvable. At least I have confirmed that if a txid can't be found, it complains a lot.

With everything happening in parallel, it is easy to end up thinking a bundle is ready before it is totally ready. So I had to combine the "1st" field with simply waiting 3 seconds after a bundle is written out. That gives time to avoid race conditions; I am seeing some cases where writing the data out to the file has not completed, even though it is memory mapped. When dealing with multiple threads, it does take time to get a fully synchronized state across all of them, especially with buffered files.
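
The incomplete-write symptom above is the usual buffered-write hazard. As a sketch (illustrative, not iguana's code): the writer drains its buffers and fsyncs before anyone is allowed to map the file, so readers never see a partially flushed bundle:

```python
import mmap, os, tempfile

def write_bundle(path, payload):
    """Write a bundle file and force it to disk before signaling readiness."""
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()             # drain the userspace buffers
        os.fsync(f.fileno())  # ask the OS to commit the bytes to the file

def map_bundle(path):
    """Read-only memory map of a bundle file, opened only after the writer
    has finished and synced."""
    with open(path, "rb") as f:
        return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

path = os.path.join(tempfile.mkdtemp(), "bundle_0.bin")
write_bundle(path, b"\x00" * 4096)
m = map_bundle(path)
assert len(m) == 4096
```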

I just realized that with 80% of vins resolved within the bundle, it could well be worth verifying those during bundle creation. Since I don't need any extra fields to denote an external reference, it won't create chances for inconsistent data.

1st.52 N[202] Q.-319 h.402586 r.128003 c.0:128000:0 s.225966 d.64 E.64:243166 M.113998 L.402584 est.64 28.0MB 0:07:58 445.930 peers.127/1024 Q.(0 0)

CPU usage is now around 600%, so there is a lot more load and not much extra capacity; I am not sure whether doing sig validation will totally mess up the processing speed. With so much data, avoiding another pass through would save a lot of time, but if there isn't enough CPU it won't help.

To give an idea of how fast iguana is: even using a serial search through all the bundles, it is doing searches for hundreds of millions of vin txids on the fly, during heavy system load.

utxo 10177 spendinds.[49] errs.0 [14.22%] emitted.1836 10.758kb of 10177 | GENERATED UTXO for ht.98000 duration 0 seconds

The above is a typical one. I benchmarked it, but all of the early bundles show 0 seconds as I am doing seconds-level timings; I guess I should have used milliseconds. Still, with 1836 vins fully processed (including writing out the file), that means at most 500 microseconds per search, and it could be a lot less, as it all finished the same second it started.

utxo 196742 spendinds.[80] errs.0 [21.85%] emitted.78865 462.100kb of 196742 | GENERATED UTXO for ht.160000 duration 10 seconds

Got one that took 10 seconds, so it looks like about 120 microseconds per vin, from around the 40% mark. So even for the latest blocks it would be about 250 microseconds per search.
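
A back-of-envelope check of the per-vin times quoted above (assuming the `emitted` counts are what the coarse durations cover, which is my reading of the log lines):

```python
def us_per_search(emitted, seconds):
    """Microseconds per vin, from an emitted count and a coarse seconds-level
    timing (sub-second durations only give an upper bound)."""
    return seconds * 1e6 / emitted

# ht.160000 bundle above: 78865 emitted in 10 seconds -> ~127 us each
assert round(us_per_search(78865, 10)) == 127
# first bundle: 1836 emitted within one second -> at most ~545 us each
assert round(us_per_search(1836, 1)) == 545
```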

then there is:

utxo 829521 spendinds.[91] errs.0 [20.21%] emitted.80203 469.939kb of 829521 | GENERATED UTXO for ht.182000 duration 1 seconds

So maybe it was heavy system load that caused the slow performance, and the real time is closer to 20 microseconds?

1st.108 N[202] Q.-644 h.402589 r.222002 c.0:222000:0 s.337799 d.111 E.111:355686 M.314000 L.402584 est.64 28.1MB 0:20:04 187.693 peers.127/1024 Q.(0 0)

I checked the compressibility of the utxo vectors and they compressed by over 90%, so the total compressed size would be a very small 30MB, which starts to be small enough to fit in the CPU L* cache! But all that is needed is a bitmap for spent status. Depending on the distribution, using the same sparsehash table method might be best. I think 90% of unspents are spent, so the bitmap is pretty sparse and maybe it can be run-length encoded, but that makes looking it up a bit more complicated, unless it is put into a compressed filesystem.
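
The run-length idea can be sketched as follows (illustrative only; the encoder stores (start, bit) runs, and the lookup is a binary search over run starts, which is exactly the extra complication mentioned):

```python
from bisect import bisect_right
from itertools import groupby

def rle_encode(bits):
    """Encode a 0/1 sequence as a list of (start_index, value) runs."""
    runs, i = [], 0
    for val, grp in groupby(bits):
        runs.append((i, val))
        i += len(list(grp))
    return runs

def rle_lookup(runs, index):
    """Look up the bit at `index`: binary-search the run start positions."""
    starts = [start for start, _ in runs]
    pos = bisect_right(starts, index) - 1
    return runs[pos][1]

# spent-status bitmap: 1 = spent, 0 = unspent
bits = [1, 1, 0, 1, 0, 0, 0, 1]
runs = rle_encode(bits)
assert all(rle_lookup(runs, i) == b for i, b in enumerate(bits))
```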

utxo 1417963 spendinds.[109] errs.0 [14.56%] emitted.154623 905.994kb of 1417963 | GENERATED UTXO for ht.218000 duration 5 seconds

utxo 1525706 spendinds.[111] errs.0 [14.19%] emitted.170555 999.346kb of 1525706 | GENERATED UTXO for ht.222000 duration 161 seconds

Clearly, overall system activity has a big effect on the txid search time. The following was during 200% CPU load:

utxo 1495166 spendinds.[116] errs.0 [14.81%] emitted.256541 1503.170kb of 1495166 | GENERATED UTXO for ht.232000 duration 2 seconds

So it seems 10 microseconds per search is about right.

1st.118 N[202] Q.-1214 h.402592 r.254002 c.0:254000:0 s.396135 d.127 E.127:416280 M.402591 L.402592 est.64 28.1MB 0:33:39 116.434 peers.128/1024 Q.(0 0)

It seems there isn't much CPU left, so I will put the sig validation in the background.
 

jl777

I found a config that gets most of the data downloaded in 20 minutes and frontloads the high CPU load:

1st.105 N[202] Q.-3612 h.402610 r.234000 c.0:234000:0 s.399861 d.117 E.117:428113 M.402609 L.402610 est.128 119.1MB 0:33:03 809.178 peers.128/1024 Q.(0 0)

It ends up about the same as above, but past the 20-minute mark it is only using 1 core. That means 7 cores are available for sig validation, which I will need to run at a lower priority than the other tasks so it doesn't slow anything else down. On my server, it doesn't look like 30 minutes is enough time, due to the older CPU and normal HDD.
 

jl777

1st.164 N[202] Q.-300 h.402623 r.348001 c.0:348000:0 s.401869 d.174 E.174:445691 M.402622 L.402623 est.32 13.8MB 0:31:50 47.367 peers.127/1024 Q.(0 0)
 

Chronos

Member
Mar 6, 2016
56
44
it seems that there isnt much CPU left
@jl777 I see a lot of comments about making sure CPU is maxed on your specific machine. However, you've mentioned that you're testing on a relatively low-end laptop. Would it be helpful for those with beefier machines to test, reporting specifically on how much of their CPU is consumed? It may be that the HDD and memory on your box are fast, while your CPU is lower-end.
 

jl777

The more datapoints I have, the better. Right now I test on a 1.2GHz i5 laptop with 4GB RAM and a 128GB SSD, and also on an 8-core VPS server with 32GB RAM and a normal HDD.

The current dataset compresses to 17GB (without sigs).

So my target of 20GB appears to be pretty close, as only the address balance field and a few small ones are left to add. But currently a lot more space is needed during sync, ~100GB.

At some point I will have time to optimize this.
 

jl777

I am now generating the utxo dataset, so I am ready to test account balance calculations and listunspent. I sped things up by more than the slowdown caused by calculating the utxo data.

2000 1st.106-168 N[202] Q.128 h.402764 r.238000 c.0.000kb s.307892 d.119 E.119:344543 M.87999 L.402762 est.50 3129.211kb 0:09:41 146.717 peers.128/1024 Q.(0 1277)

2000 1st.153-201 N[202] Q.124 h.402765 r.308000 c.0.000kb s.400266 d.154 E.154:437609 M.402762 L.402763 est.50 3129.211kb 0:26:53 543.825 peers.128/1024 Q.(0 1565)
 

grewalsatinder

Chronos said:
@jl777 I see a lot of comments about making sure CPU is maxed on your specific machine. However, you've mentioned that you're testing on a relatively low-end laptop. Would it be helpful for those with beefier machines to test, reporting specifically on how much of their CPU is consumed? It may be that the HDD and memory on your box are fast, while your CPU is lower-end.
As I understand it, for low-end machines like IoT and mobile devices there'll be an option to download the data using torrents. This will help download the blockchain bundles faster and with less CPU/RAM required, all happening in parallel with the latest data being synced from the network. If I get that correctly, the later CPU/RAM usage won't be much, and iguana can easily run as a full node on IoT and mobile devices.
 
