- Creating the table from scratch took 30703 ms, just over half a minute.
- Looking up 100 random records by ID took 313 ms, thanks to binary search.
- Modifying the hash and offset of the same 100 random records took 109 ms. This result is misleading: the benchmark records the time before the loop and takes the difference after it, so all the debug printing done inside the loop counts towards the total. In reality I expect the lookup itself to take around 50 ms.
- Deleting the same 100 records took 609 ms; again, printing was done for debugging purposes, so the actual time should be about the same as modification.
- Surprisingly, inserting 100 records with new hashes and offsets took only 188 ms, even though the internal linked list is updated and debug output was printed. The file size was the same before and after the delete/insert operations, which, combined with the debug output, indicates that deleted records are being fully reused.
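The lookup path above relies on records being kept sorted by ID so a binary search can find any one of them in O(log n) steps. A minimal sketch of that idea, assuming a hypothetical fixed-size record layout of three little-endian 64-bit fields (id, hash, offset) packed into one buffer (the post does not describe the actual layout or language):

```python
import struct

# Hypothetical record layout: 8-byte id, 8-byte hash, 8-byte offset.
RECORD = struct.Struct("<QQQ")

def find_by_id(buf: bytes, record_id: int) -> int:
    """Binary search over fixed-size records sorted by id.

    Returns the record's index in the buffer, or -1 if absent.
    """
    lo, hi = 0, len(buf) // RECORD.size
    while lo < hi:
        mid = (lo + hi) // 2
        rid, _, _ = RECORD.unpack_from(buf, mid * RECORD.size)
        if rid < record_id:
            lo = mid + 1
        elif rid > record_id:
            hi = mid
        else:
            return mid
    return -1
```

With 100 lookups each touching only ~log2(n) records, most of the 313 ms above would be I/O and unpacking overhead rather than comparisons.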
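The skewed modification and deletion numbers come from timing the whole loop, printing included. One way to avoid that (a sketch, not the post's actual harness) is to accumulate only the time spent inside each operation and do the reporting outside the timed window:

```python
import time

def timed_ops(ops, report=print):
    """Run each zero-argument operation, accumulating only the time spent
    inside the operation itself; debug reporting happens outside the
    timed window so it cannot skew the total."""
    total = 0.0
    for op in ops:
        start = time.perf_counter()
        result = op()
        total += time.perf_counter() - start
        report(result)  # not counted towards `total`
    return total
```

This would let the ~50 ms estimate for the raw operations be measured directly instead of inferred.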
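The unchanged file size after the delete/insert cycle is consistent with a free-list scheme: deleted slots are remembered and handed back to subsequent inserts before the store is grown. A minimal in-memory sketch of that behaviour (the field names and LIFO reuse order are assumptions, not the post's actual design):

```python
class RecordTable:
    """Toy table where deleted slots go on a free list and are reused
    by later inserts, so the backing store does not grow after a
    delete/insert cycle."""

    def __init__(self):
        self.slots = []  # slot index -> (hash, offset), or None when free
        self.free = []   # indices of deleted slots, reused LIFO

    def insert(self, h, offset):
        if self.free:
            slot = self.free.pop()       # reuse a deleted slot
            self.slots[slot] = (h, offset)
        else:
            slot = len(self.slots)       # grow only when no slot is free
            self.slots.append((h, offset))
        return slot

    def delete(self, slot):
        self.slots[slot] = None
        self.free.append(slot)
```

Deleting 100 records and inserting 100 new ones leaves `len(self.slots)` unchanged, matching the observed constant file size.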
Now for a little Doom 3, then it's off to write the datastore and vessel code.