schema-layer/data-structures/hashmap.md (2 additions, 3 deletions)
@@ -117,7 +117,7 @@ type Element union {
 type Bucket list [ BucketEntry ]

 type BucketEntry struct {
-  key Bytes
+  key String
   value Value (implicit "null")
 } representation tuple
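For illustration only, a minimal TypeScript sketch of what this change means on the wire (the `Value` stand-in, the tuple type name, and the example values are assumptions, not spec definitions): with `representation tuple`, a `BucketEntry` encodes as a positional 2-element list, and after this change its first element is a UTF-8 string key rather than a byte string.

```typescript
// Hypothetical illustration: a BucketEntry with `representation tuple`
// serializes as a positional 2-element list, not a keyed map.
// `Value` here is a simplified stand-in for the spec's Value union;
// the spec additionally marks `value` as (implicit "null").
type Value =
  | null
  | boolean
  | number
  | string
  | Uint8Array
  | Value[]
  | { [key: string]: Value };

// After this change the key is a String; previously it was Bytes (Uint8Array).
type BucketEntryTuple = [key: string, value: Value];

// Example entry: key "foo" mapping to the integer 42.
const entry: BucketEntryTuple = ['foo', 42];
console.log(JSON.stringify(entry)); // ["foo",42]
```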
@@ -138,7 +138,6 @@ Notes:
 * `hashAlg` in the root block is a string identifier for a hash algorithm. The identifier should correspond to a [multihash](https://github.com/multiformats/multihash) identifier as found in the [multiformats table](https://github.com/multiformats/multicodec/blob/master/table.csv).
 * `bitWidth` in the root block should be at least `3`.
 * `bucketSize` in the root block must be at least `1`.
-* Keys are stored in `Byte` form.
 * The size of `map` is determined by `bitWidth` since it holds one bit per possible data element. It must be `1` or `2`<sup>`bitWidth`</sup>` / 8` bytes long, whichever is largest.

 ## Algorithm in detail
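As a quick aid to the `map` sizing note above, here is a tiny TypeScript helper; `mapByteLength` is an illustrative name assumed for this sketch, not a spec-defined or library function.

```typescript
// Sketch: the `map` bitmap holds one bit per possible data element, i.e.
// 2^bitWidth bits, so its byte length is max(1, 2^bitWidth / 8).
function mapByteLength(bitWidth: number): number {
  if (!Number.isInteger(bitWidth) || bitWidth < 3) {
    throw new RangeError('bitWidth should be an integer of at least 3');
  }
  return Math.max(1, 2 ** bitWidth / 8);
}

console.log(mapByteLength(3)); // 1  (8 bits)
console.log(mapByteLength(8)); // 32 (256 bits)
```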
@@ -181,7 +180,7 @@ Notes:
       3. Proceed to create new CIDs for the current block and each parent as per step **6.c**. until we have a new root block and its CID.
 3. If the `dataIndex` element of `data` contains a bucket (array) and the bucket's size is `bucketSize`:
    1. Create a new empty node
-   2. For each element of the bucket, perform a `Set(key, value)` on the new empty node with a `depth` set to `depth + 1`, proceeding from step **2**. This should create a new node with `bucketSize` elements distributed approximately evenly through its `data` array. This operation will only result in more than one new node being created if all `key`s being set have the same `bitWidth` bits of their hashes at `bitWidth` position `depth + 1` (and so on). A sufficiently random hash algorithm should prevent this from occuring.
+   2. For each element of the bucket, perform a `Set(key, value)` on the new empty node with a `depth` set to `depth + 1`, proceeding from step **2**. This should create a new node with `bucketSize` elements distributed approximately evenly through its `data` array. This operation will only result in more than one new node being created if all `key`s being set have the same `bitWidth` bits of their hashes at `bitWidth` position `depth + 1` (and so on). A sufficiently random hash algorithm should prevent this from occurring.
    3. Create a CID for the new child node.
    4. Mutate the current node (create a copy)
    5. Replace `dataIndex` of `data` with a link to the new child node.
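To make sub-steps 1 through 5 of this bucket-overflow case easier to follow, here is a hedged TypeScript sketch. Every name in it (`HashMapNode`, `splitBucket`, the `set` and `createCID` callbacks) is an illustrative assumption for this sketch, not a spec or library API.

```typescript
// Illustrative types; `Link` stands in for a real CID link to a child block.
type Entry = [key: string, value: unknown];
type Link = { cid: string };
interface HashMapNode {
  map: Uint8Array;               // bitmap, one bit per possible data element
  data: Array<Entry[] | Link>;   // each element is either a bucket or a child link
}

// Sketch of the overflow case: the full bucket at `dataIndex` is pushed one
// level down so its entries redistribute by the next `bitWidth` bits of their
// hashes, and the bucket is replaced by a link to the new child node.
function splitBucket(
  current: HashMapNode,
  dataIndex: number,
  bitWidth: number,
  depth: number,
  set: (node: HashMapNode, key: string, value: unknown, depth: number) => void, // re-runs Set() from step 2
  createCID: (node: HashMapNode) => string                                      // encodes the block and hashes it
): HashMapNode {
  const bucket = current.data[dataIndex] as Entry[];
  // 1. Create a new empty node
  const child: HashMapNode = {
    map: new Uint8Array(Math.max(1, 2 ** bitWidth / 8)),
    data: [],
  };
  // 2. Re-insert every entry of the full bucket at depth + 1
  for (const [key, value] of bucket) set(child, key, value, depth + 1);
  // 3. Create a CID for the new child node
  const cid = createCID(child);
  // 4. Mutate a copy of the current node ...
  const copy: HashMapNode = { map: current.map.slice(), data: current.data.slice() };
  // 5. ... replacing `dataIndex` of `data` with a link to the new child node
  copy.data[dataIndex] = { cid };
  return copy;
}
```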