Compare commits

...

9 Commits

Author SHA1 Message Date
Ed_
a0ddc3c26e minor misc (end of day stuff) 2025-10-21 23:21:07 -04:00
Ed_
2303866c81 code2/grime progress 2025-10-21 22:57:23 -04:00
Ed_
96c6d58ea0 Progress on code2/grime allocators 2025-10-21 22:10:48 -04:00
Ed_
f63b52f910 curate fixed stack 2025-10-21 22:10:23 -04:00
Ed_
6d5215ac1e Make ensures/verifies in Array asserts 2025-10-21 22:08:29 -04:00
Ed_
1e18592ff5 thinking about key tables... 2025-10-21 22:07:55 -04:00
Ed_
43141183a6 wip messing around with adding jai flavored hash/key table. 2025-10-20 12:51:29 -04:00
Ed_
0607d81f70 ignore .idea 2025-10-18 20:47:49 -04:00
Ed_
58ba273dd1 code2: initial curation of virtual arena 2025-10-18 20:46:06 -04:00
30 changed files with 985 additions and 343 deletions

1
.gitignore vendored
View File

@@ -35,3 +35,4 @@ ols.json
*.spall
sectr.user
sectr.proj
.idea

View File

@@ -2,7 +2,10 @@
This prototype aims to flesh out ideas I've wanted to explore further on code editing & related tooling.
The things to explore:
The current goal with the prototype is just making a good visualizer & note aggregator for codebases & libraries.
My note repos with affine links give an idea of what that would look like.
The things to explore (future):
* 2D canvas for laying out code visualized in various types of ASTs
* WYSIWYG frontend ASTs
@@ -28,55 +31,14 @@ The dependencies are:
* [sokol-odin (Sectr Fork)](https://github.com/Ed94/sokol-odin)
* [sokol-tools](https://github.com/floooh/sokol-tools)
* Powershell (if you want to use my build scripts)
* backtrace (not used yet)
* freetype (not used yet)
* Eventually some config parser (maybe I'll use metadesk, or [ini](https://github.com/laytan/odin-ini-parser))
The project is so far in a "codebase bootstrapping" phase. Most of the work being done right now is setting up high-performance linear zoom rendering for text and UI.
Text has recently hit sufficient performance targets, and now initial UX has become the focus.
The project is organized into 2 runtime modules: sectr_host & sectr.
The host module loads the main module & its memory, hot-reloading its DLL when it detects a change.
Codebase organization:
* App: General app config, state, and operations.
* Engine: client interface for host, tick, update, rendering.
* Has the following definitions: startup, shutdown, reload, tick, clean_frame (which the host hooks up to when managing the client DLL)
* Will handle async ops.
* Font Provider: Manages fonts.
* Bulk of implementation maintained as a separate library: [VEFontCache-Odin](https://github.com/Ed94/VEFontCache-Odin)
* Grime: Name speaks for itself; stuff not directly related to the target features being iterated on for the prototype.
* Defining dependency aliases or procedure overload tables, rolling our own allocators, data structures, etc.
* Input: All human input related features
* Base input features (polling & related) are platform abstracted from sokol_app
* Entirely user rebindable
* Math: The usual for 2D/3D.
* Parsers:
* AST generation, editing, and serialization.
* Parsers for different levels of "syntactic & semantic awareness", Formatting -> Domain Specific AST
* Figure out pragmatic transformations between ASTs.
* Project: Encapsulation of user config/context/state separate from the persistent app's
* Manages the codebase (database & model view controller)
* Manages workspaces: view compositions of the codebase
* UI: Core graphical user interface framework, AST visualization & editing, backend visualization
* PIMGUI (Persistent Immediate Mode User Interface)
* Auto-layout
* Supports heavy procedural generation of box widgets
* Viewports
* Docking/Tiling, Floating, Canvas
Due to the nature of the prototype there are 'sub-groups', such as the codebase and the workspace, that are each their own ordeal.
They'll be elaborated on in their own documentation.
## Gallery
![img](docs/assets/sectr_host_2024-03-09_04-30-27.png)
![img](docs/assets/sectr_host_2024-05-04_12-29-39.png)
![img](docs/assets/Code_2024-05-04_12-55-53.png)
![img](docs/assets/sectr_host_2024-05-11_22-34-15.png)
![img](docs/assets/sectr_host_2024-05-15_03-32-36.png)
![img](docs/assets/Code_2024-05-21_23-15-16.gif)
## Notes

View File

@@ -115,8 +115,8 @@ AllocatorInfo :: struct {
// Listing of every single allocator (used on hot-reloadable builds)
AllocatorProcID :: enum uintptr {
FArena,
// VArena,
// CArena,
VArena,
Arena,
// Pool,
// Slab,
// Odin_Arena,
@@ -127,8 +127,8 @@ resolve_allocator_proc :: #force_inline proc "contextless" (procedure: $Allocato
when ODIN_DEBUG {
switch (transmute(AllocatorProcID)procedure) {
case .FArena: return farena_allocator_proc
// case .VArena: return varena_allocaotr_proc
// case .CArena: return carena_allocator_proc
case .VArena: return varena_allocator_proc
case .Arena: return arena_allocator_proc
// case .Pool: return pool_allocator_proc
// case .Slab: return slab_allocator_proc
// case .Odin_Arena: return odin_arena_allocator_proc
@@ -145,8 +145,8 @@ resolve_odin_allocator :: #force_inline proc "contextless" (allocator: Odin_Allo
when ODIN_DEBUG {
switch (transmute(AllocatorProcID)allocator.procedure) {
case .FArena: return { farena_odin_allocator_proc, allocator.data }
// case .VArena: return { varena_odin_allocaotr_proc, allocator.data }
// case .CArena: return { carena_odin_allocator_proc, allocator.data }
case .VArena: return { varena_odin_allocator_proc, allocator.data }
case .Arena: return { arena_odin_allocator_proc, allocator.data }
// case .Pool: return nil // pool_allocator_proc
// case .Slab: return nil // slab_allocator_proc
// case .Odin_Arena: return nil // odin_arena_allocator_proc
@@ -157,7 +157,7 @@ resolve_odin_allocator :: #force_inline proc "contextless" (allocator: Odin_Allo
switch (allocator.procedure) {
case farena_allocator_proc: return { farena_odin_allocator_proc, allocator.data }
case varena_allocator_proc: return { varena_odin_allocator_proc, allocator.data }
case carena_allocator_proc: return { carena_odin_allocator_proc, allocator.data }
case arena_allocator_proc: return { arena_odin_allocator_proc, allocator.data }
}
}
panic_contextless("Unresolvable procedure")
@@ -177,6 +177,7 @@ odin_allocator_mode_to_allocator_op :: #force_inline proc "contextless" (mode: O
panic_contextless("Impossible path")
}
// TODO(Ed): Change to DEFAULT_ALIGNMENT
MEMORY_ALIGNMENT_DEFAULT :: 2 * size_of(rawptr)
allocatorinfo :: #force_inline proc(ainfo := context.allocator) -> AllocatorInfo { return transmute(AllocatorInfo) ainfo }
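// Why the proc-ID indirection above: on hot-reloadable (debug) builds the client DLL may be
// reloaded at a new base address, which would leave raw allocator procedure pointers dangling.
// Storing a stable enum ID and resolving it at the call site sidesteps that. A hedged sketch,
// assuming a ^VArena named varena (the VArena allocator appears later in this diff):
info := AllocatorInfo { proc_id = .VArena, data = varena } // debug builds store the ID, not the pointer
resolved := resolve_odin_allocator(transmute(Odin_Allocator) info) // -> { varena_odin_allocator_proc, varena }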

View File

@@ -1,6 +1,6 @@
package grime
// Below should be defined per-package
// TODO(Ed): Below should be defined per-package?
ensure :: #force_inline proc(condition: bool, msg: string, location := #caller_location) -> bool {
if condition do return true

View File

@@ -128,7 +128,7 @@ array_append_value :: proc(self: ^Array($Type), value: Type) -> AllocatorError {
// Assumes non-overlapping for items.
array_append_at_slice :: proc(self : ^Array($Type ), items: []Type, id: int) -> AllocatorError {
ensure(id < self.num, "Why are we doing an append at beyond the bounds of the current element count")
assert(id < self.num, "Why are we doing an append at beyond the bounds of the current element count")
id := id
if id >= self.num { return array_append_slice(items) }
if len(items) > self.capacity {
@@ -143,7 +143,7 @@ array_append_at_slice :: proc(self : ^Array($Type ), items: []Type, id: int) ->
return AllocatorError.None
}
array_append_at_value :: proc(self: ^Array($Type), item: Type, id: int) -> AllocatorError {
ensure(id < self.num, "Why are we doing an append at beyond the bounds of the current element count")
assert(id < self.num, "Why are we doing an append at beyond the bounds of the current element count")
id := id; {
// TODO(Ed): Not sure I want this...
if id >= self.num do id = self.num
@@ -167,8 +167,8 @@ array_clear :: #force_inline proc "contextless" (self: Array($Type), zero_data:
}
array_fill :: proc(self: Array($Type), begin, end: u64, value: Type) -> bool {
ensure(end - begin <= num)
ensure(end <= num)
assert(end - begin <= num)
assert(end <= num)
if (end - begin > num) || (end > num) do return false
mem_fill(data[begin:], value, end - begin)
return true
@@ -183,7 +183,7 @@ array_push_back :: #force_inline proc "contextless" (self: Array($Type)) -> bool
}
array_remove_at :: proc(self: Array($Type), id: int) {
verify( id < self.num, "Attempted to remove from an index larger than the array" )
assert( id < self.num, "Attempted to remove from an index larger than the array" )
mem_copy(self.data[id:], self.data[id + 1:], (self.num - id) * size_of(Type))
self.num -= 1
}

View File

@@ -1,7 +1,7 @@
package grime
// TODO(Ed): Review when os2 is done.
// TODO(Ed): Make an async option...
// TODO(Ed): Make an async option?
file_copy_sync :: proc( path_src, path_dst: string, allocator := context.allocator ) -> b32
{
file_size : i64

View File

@@ -0,0 +1,29 @@
package grime
FStack :: struct ($Type: typeid, $Size: u32) {
items: [Size]Type,
idx: u32,
}
stack_clear :: #force_inline proc "contextless" (stack: ^FStack($Type, $Size)) { stack.idx = 0 }
stack_push :: #force_inline proc "contextless" (stack: ^FStack($Type, $Size ), value: Type) {
assert_contextless(stack.idx < u32(len( stack.items )), "Attempted to push on a full stack")
stack.items[stack.idx] = value
stack.idx += 1
}
stack_pop :: #force_inline proc "contextless" (stack: ^FStack($Type, $Size)) {
assert_contextless(stack.idx > 0, "Attempted to pop an empty stack")
stack.idx -= 1
if stack.idx == 0 {
stack.items[stack.idx] = {}
}
}
stack_peek_ref :: #force_inline proc "contextless" (s: ^FStack($Type, $Size)) -> (^Type) {
return & s.items[/*last_idx*/ max( 1, s.idx ) - 1]
}
stack_peek :: #force_inline proc "contextless" (s: ^FStack($Type, $Size)) -> Type {
return s.items[/*last_idx*/ max( 0, s.idx - 1 )]
}
stack_push_contextless :: #force_inline proc "contextless" (s: ^FStack($Type, $Size), value: Type) {
s.items[s.idx] = value
s.idx += 1
}
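// Minimal usage sketch of the fixed stack above:
nums: FStack(int, 8)
stack_push(& nums, 1)
stack_push(& nums, 2)
top := stack_peek(& nums) // 2
stack_pop(& nums)         // drops back to one element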

View File

@@ -1,9 +1,20 @@
package grime
hash32_djb8 :: #force_inline proc "contextless" ( hash : ^u32, bytes : []byte ) {
hash32_djb8 :: #force_inline proc "contextless" (hash: ^u32, bytes: []byte ) {
for value in bytes do (hash^) = (( (hash^) << 8) + (hash^) ) + u32(value)
}
hash64_djb8 :: #force_inline proc "contextless" ( hash : ^u64, bytes : []byte ) {
hash64_djb8 :: #force_inline proc "contextless" (hash: ^u64, bytes: []byte ) {
for value in bytes do (hash^) = (( (hash^) << 8) + (hash^) ) + u64(value)
}
// Ripped from core:hash, fnv32a
@(optimization_mode="favor_size")
hash32_fnv1a :: #force_inline proc "contextless" (hash: ^u32, data: []byte, seed := u32(0x811c9dc5)) {
hash^ = seed; for b in data { hash^ = (hash^ ~ u32(b)) * 0x01000193 }
}
// Ripped from core:hash, fnv64a
@(optimization_mode="favor_size")
hash64_fnv1a :: #force_inline proc "contextless" (hash: ^u64, data: []byte, seed := u64(0xcbf29ce484222325)) {
hash^ = seed; for b in data { hash^ = (hash^ ~ u64(b)) * 0x100000001b3 }
}
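// Usage sketch (seeds default to the core:hash FNV constants above):
key: u64
name := "example"
hash64_fnv1a(& key, transmute([]byte) name)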

View File

@@ -1,164 +0,0 @@
package grime
import "base:intrinsics"
/*
Key Table 1-Layer Chained-Chunked-Cells
*/
KT1CX_Slot :: struct($type: typeid) {
value: type,
key: u64,
occupied: b32,
}
KT1CX_Cell :: struct($type: typeid, $depth: int) {
slots: [depth]KT1CX_Slot(type),
next: ^KT1CX_Cell(type, depth),
}
KT1CX :: struct($cell: typeid) {
table: []cell,
}
KT1CX_Byte_Slot :: struct {
key: u64,
occupied: b32,
}
KT1CX_Byte_Cell :: struct {
next: ^byte,
}
KT1CX_Byte :: struct {
table: []byte,
}
KT1CX_ByteMeta :: struct {
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
KT1CX_InfoMeta :: struct {
table_size: int,
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
KT1CX_Info :: struct {
backing_table: AllocatorInfo,
}
kt1cx_init :: proc(info: KT1CX_Info, m: KT1CX_InfoMeta, result: ^KT1CX_Byte) {
assert(result != nil)
assert(info.backing_table.procedure != nil)
assert(m.cell_depth > 0)
assert(m.table_size >= 4 * Kilo)
assert(m.type_width > 0)
table_raw, error := mem_alloc(m.table_size * m.cell_size, ainfo = allocator(info.backing_table))
assert(error == .None); slice_assert(transmute([]byte) table_raw)
(transmute(^SliceByte) & table_raw).len = m.table_size
result.table = table_raw
}
kt1cx_clear :: proc(kt: KT1CX_Byte, m: KT1CX_ByteMeta) {
cell_cursor := cursor(kt.table)
table_len := len(kt.table) * m.cell_size
for ; cell_cursor != end(kt.table); cell_cursor = cell_cursor[m.cell_size:] // for cell, cell_id in kt.table.cells
{
slots := SliceByte { cell_cursor, m.cell_depth * m.slot_size } // slots = cell.slots
slot_cursor := slots.data
for;; {
slot := slice(slot_cursor, m.slot_size) // slot = slots[slot_id]
zero(slot) // slot = {}
if slot_cursor == end(slots) { // if slot == end(slot)
next := slot_cursor[m.cell_next_offset:] // next = kt.table.cells[cell_id + 1]
if next != nil { // if next != nil
slots.data = next // slots = next.slots
slot_cursor = next
continue
}
}
slot_cursor = slot_cursor[m.slot_size:] // slot = slots[slot_id + 1]
}
}
}
kt1cx_slot_id :: proc(kt: KT1CX_Byte, key: u64, m: KT1CX_ByteMeta) -> u64 {
cell_size := m.cell_size // dummy value
hash_index := key % u64(len(kt.table))
return hash_index
}
kt1cx_get :: proc(kt: KT1CX_Byte, key: u64, m: KT1CX_ByteMeta) -> ^byte {
hash_index := kt1cx_slot_id(kt, key, m)
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // cell_id = 0
{
slots := slice(cell_cursor, m.cell_depth * m.slot_size) // slots = cell[cell_id].slots
slot_cursor := cell_cursor // slot_id = 0
for;;
{
slot := transmute(^KT1CX_Byte_Slot) slot_cursor[m.slot_key_offset:] // slot = cell[slot_id]
if slot.occupied && slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots)
{
cell_next := cell_cursor[m.cell_next_offset:] // cell.next
if cell_next != nil {
slots = slice(cell_next, len(slots)) // slots = cell.next
slot_cursor = cell_next
cell_cursor = cell_next // cell = cell.next
continue
}
else {
return nil
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
}
}
kt1cx_set :: proc(kt: KT1CX_Byte, key: u64, value: []byte, backing_cells: Odin_Allocator, m: KT1CX_ByteMeta) -> ^byte {
hash_index := kt1cx_slot_id(kt, key, m)
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // KT1CX_Cell(Type) cell = kt.table[hash_index]
{
slots := SliceByte {cell_cursor, m.cell_depth * m.slot_size} // cell.slots
slot_cursor := slots.data
for ;;
{
slot := transmute(^KT1CX_Byte_Slot) slot_cursor[m.slot_key_offset:]
if slot.occupied == false {
slot.occupied = true
slot.key = key
return cast(^byte) slot_cursor
}
else if slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots) {
curr_cell := transmute(^KT1CX_Byte_Cell) (uintptr(cell_cursor) + m.cell_next_offset) // curr_cell = cell
if curr_cell != nil {
slots.data = curr_cell.next
slot_cursor = curr_cell.next
cell_cursor = curr_cell.next
continue
}
else {
new_cell, _ := mem_alloc(m.cell_size, ainfo = backing_cells)
curr_cell.next = raw_data(new_cell)
slot = transmute(^KT1CX_Byte_Slot) cursor(new_cell)[m.slot_key_offset:]
slot.occupied = true
slot.key = key
return raw_data(new_cell)
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
return nil
}
}
kt1cx_assert :: proc(kt: $type / KT1CX) {
slice_assert(kt.table)
}
kt1cx_byte :: proc(kt: $type / KT1CX) -> KT1CX_Byte { return { slice( transmute([^]byte) cursor(kt.table), len(kt.table)) } }

View File

@@ -1,48 +0,0 @@
package grime
/*
Key Table 1-Layer Linear (KT1L)
*/
KT1L_Slot :: struct($Type: typeid) {
key: u64,
value: Type,
}
KT1L_Meta :: struct {
slot_size: uintptr,
kt_value_offset: uintptr,
type_width: uintptr,
type: typeid,
}
kt1l_populate_slice_a2_Slice_Byte :: proc(kt: ^[]byte, backing: AllocatorInfo, values: []byte, num_values: int, m: KT1L_Meta) {
assert(kt != nil)
if num_values == 0 { return }
table_size_bytes := num_values * int(m.slot_size)
kt^, _ = mem_alloc(table_size_bytes, ainfo = transmute(Odin_Allocator) backing)
slice_assert(kt ^)
kt_raw : SliceByte = transmute(SliceByte) kt^
for id in 0 ..< cast(uintptr) num_values {
slot_offset := id * m.slot_size // slot id
slot_cursor := kt_raw.data[slot_offset:] // slots[id] type: KT1L_<Type>
// slot_key := transmute(^u64) slot_cursor // slots[id].key type: U64
// slot_value := slice(slot_cursor[m.kt_value_offset:], m.type_width) // slots[id].value type: <Type>
a2_offset := id * m.type_width * 2 // a2 entry id
a2_cursor := cursor(values)[a2_offset:] // a2_entries[id] type: A2_<Type>
// a2_key := (transmute(^[]byte) a2_cursor) ^ // a2_entries[id].key type: <Type>
// a2_value := slice(a2_cursor[m.type_width:], m.type_width) // a2_entries[id].value type: <Type>
mem_copy_non_overlapping(slot_cursor[m.kt_value_offset:], a2_cursor[m.type_width:], cast(int) m.type_width) // slots[id].value = a2_entries[id].value
(transmute([^]u64) slot_cursor)[0] = 0;
hash64_djb8(transmute(^u64) slot_cursor, (transmute(^[]byte) a2_cursor) ^) // slots[id].key = hash64_djb8(a2_entries[id].key)
}
kt_raw.len = num_values
}
kt1l_populate_slice_a2 :: proc($Type: typeid, kt: ^[]KT1L_Slot(Type), backing: AllocatorInfo, values: [][2]Type) {
assert(kt != nil)
values_bytes := slice(transmute([^]u8) raw_data(values), len(values) * size_of([2]Type))
kt1l_populate_slice_a2_Slice_Byte(transmute(^[]byte) kt, backing, values_bytes, len(values), {
slot_size = size_of(KT1L_Slot(Type)),
kt_value_offset = offset_of(KT1L_Slot(Type), value),
type_width = size_of(Type),
type = Type,
})
}

View File

@@ -0,0 +1,196 @@
package grime
import "base:intrinsics"
/*
Key Table Chained-Chunked-Cells
Each cell has a user-specified depth; a cell is searched linearly once its first slot is occupied.
Table-allocated cells are looked up by hash.
If a cell is exhausted, additional cells are allocated singly-chained, reported to the user via a "cell_overflow" counter.
Slots track occupancy with a tombstone (occupied signal).
If the table ever needs to change its size, it should be a wipe and full traversal of the arena holding the values,
or maybe a wipe of that arena, as it may no longer be accessible.
Has a likelihood of cache misses (based on reading other impls of these kinds of tables).
Odin's hash-map and Jai's are designed with open addressing and prevent that.
Intended to be wrapped in a parent interface (such as a string cache). Keys are hashed by the table's user.
The table is not intended to directly store the type's value in its slots (the slot value is expected to be some sort of reference).
The value should be stored in an arena.
Could be upgraded to an X-layer variant; not sure if it's ever viable.
Would essentially be segmenting the hash to address a multi-layered table lookup,
where one table leads to another hash-resolved id for a subtable, with a linear search of cells after.
*/
KTCX_Slot :: struct($type: typeid) {
value: type,
key: u64,
occupied: b32,
}
KTCX_Cell :: struct($type: typeid, $depth: int) {
slots: [depth]KTCX_Slot(type),
next: ^KTCX_Cell(type, depth),
}
KTCX :: struct($cell: typeid) {
table: []cell,
cell_overflow: int,
}
KTCX_Byte_Slot :: struct {
key: u64,
occupied: b32,
}
KTCX_Byte_Cell :: struct {
next: ^byte,
}
KTCX_Byte :: struct {
table: []byte,
cell_overflow: int,
}
KTCX_ByteMeta :: struct {
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
KTCX_Info :: struct {
table_size: int,
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
ktcx_byte :: #force_inline proc "contextless" (kt: $type / KTCX) -> KTCX_Byte { return { slice( transmute([^]byte) cursor(kt.table), len(kt.table)) } }
ktcx_init_byte :: proc(result: ^KTCX_Byte, tbl_backing: Odin_Allocator, m: KTCX_Info) {
assert(result != nil)
assert(tbl_backing.procedure != nil)
assert(m.cell_depth > 0)
assert(m.table_size >= 4 * Kilo)
assert(m.type_width > 0)
table_raw, error := mem_alloc(m.table_size * m.cell_size, ainfo = tbl_backing)
assert(error == .None); slice_assert(transmute([]byte) table_raw)
(transmute(^SliceByte) & table_raw).len = m.table_size
result.table = table_raw
}
ktcx_clear :: proc(kt: KTCX_Byte, m: KTCX_ByteMeta) {
cell_cursor := cursor(kt.table)
table_len := len(kt.table) * m.cell_size
for ; cell_cursor != end(kt.table); cell_cursor = cell_cursor[m.cell_size:] // for cell, cell_id in kt.table.cells
{
slots := SliceByte { cell_cursor, m.cell_depth * m.slot_size } // slots = cell.slots
slot_cursor := slots.data
for;; {
slot := slice(slot_cursor, m.slot_size) // slot = slots[slot_id]
zero(slot) // slot = {}
if slot_cursor == end(slots) { // if slot == end(slot)
next := slot_cursor[m.cell_next_offset:] // next = kt.table.cells[cell_id + 1]
if next != nil { // if next != nil
slots.data = next // slots = next.slots
slot_cursor = next
continue
}
}
slot_cursor = slot_cursor[m.slot_size:] // slot = slots[slot_id + 1]
}
}
}
ktcx_slot_id :: #force_inline proc "contextless" (table: []byte, key: u64) -> u64 {
return key % u64(len(table))
}
ktcx_get :: proc(kt: KTCX_Byte, key: u64, m: KTCX_ByteMeta) -> ^byte {
hash_index := key % u64(len(kt.table)) // ktcx_slot_id
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // cell_id = 0
{
slots := slice(cell_cursor, m.cell_depth * m.slot_size) // slots = cell[cell_id].slots
slot_cursor := cell_cursor // slot_id = 0
for;;
{
slot := transmute(^KTCX_Byte_Slot) slot_cursor[m.slot_key_offset:] // slot = cell[slot_id]
if slot.occupied && slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots)
{
cell_next := cell_cursor[m.cell_next_offset:] // cell.next
if cell_next != nil {
slots = slice(cell_next, len(slots)) // slots = cell.next
slot_cursor = cell_next
cell_cursor = cell_next // cell = cell.next
continue
}
else {
return nil
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
}
}
ktcx_set :: proc(kt: ^KTCX_Byte, key: u64, value: []byte, backing_cells: Odin_Allocator, m: KTCX_ByteMeta) -> ^byte {
hash_index := key % u64(len(kt.table)) // ktcx_slot_id
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // KTCX_Cell(Type) cell = kt.table[hash_index]
{
slots := SliceByte {cell_cursor, m.cell_depth * m.slot_size} // cell.slots
slot_cursor := slots.data
for ;;
{
slot := transmute(^KTCX_Byte_Slot) slot_cursor[m.slot_key_offset:]
if slot.occupied == false {
slot.occupied = true
slot.key = key
return cast(^byte) slot_cursor
}
else if slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots) {
curr_cell := transmute(^KTCX_Byte_Cell) (uintptr(cell_cursor) + m.cell_next_offset) // curr_cell = cell
if curr_cell != nil {
slots.data = curr_cell.next
slot_cursor = curr_cell.next
cell_cursor = curr_cell.next
continue
}
else {
ensure(false, "Exhausted a cell. Increase the table size?")
new_cell, _ := mem_alloc(m.cell_size, ainfo = backing_cells)
curr_cell.next = raw_data(new_cell)
slot = transmute(^KTCX_Byte_Slot) cursor(new_cell)[m.slot_key_offset:]
slot.occupied = true
slot.key = key
kt.cell_overflow += 1
return raw_data(new_cell)
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
return nil
}
}
// Type aware wrappers
ktcx_init :: #force_inline proc(table_size: int, tbl_backing: Odin_Allocator,
kt: ^$kt_type / KTCX(KTCX_Cell(KTCX_Slot($Type), $Depth))
){
ktcx_init_byte(transmute(^KTCX_Byte) kt, tbl_backing, {
table_size = table_size,
slot_size = size_of(KTCX_Slot(Type)),
slot_key_offset = offset_of(KTCX_Slot(Type), key),
cell_next_offset = offset_of(KTCX_Cell(Type, Depth), next),
cell_depth = Depth,
cell_size = size_of(KTCX_Cell(Type, Depth)),
type_width = size_of(Type),
type = Type,
})
}
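// Hedged usage sketch of the typed wrappers above (mirrors the StrKT_U4 aliases later in
// this diff; assumes a ^VArena named varena set up elsewhere, and a capacity >= 4 * Kilo
// per the assert in ktcx_init_byte):
Example_Slot :: KTCX_Slot(u32)
Example_Cell :: KTCX_Cell(Example_Slot, 4)
Example_KT   :: KTCX(Example_Cell)
kt: Example_KT
ktcx_init(int(closest_prime(4 * Kilo)), varena_allocator(varena), & kt)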

View File

@@ -0,0 +1,37 @@
package grime
/*
Key Table 1-Layer Linear (KT1L)
Mainly intended for linear lookup of key-paired values, e.g. argument value parsing with label ids.
The table is built in one go from the key-value pairs. The default populate slice_a2 has the key and value as the same type.
*/
KTL_Slot :: struct($Type: typeid) {
key: u64,
value: Type,
}
KTL_Meta :: struct {
slot_size: int,
kt_value_offset: int,
type_width: int,
type: typeid,
}
ktl_get :: #force_inline proc "contextless" (kt: []KTL_Slot($Type), key: u64) -> ^Type {
for & slot in kt { if key == slot.key do return & slot.value; }
return nil
}
// Unique populator for key-value pair strings
ktl_populate_slice_a2_str :: #force_inline proc(kt: ^[]KTL_Slot(string), backing: Odin_Allocator, values: [][2]string) {
assert(kt != nil)
if len(values) == 0 { return }
raw_bytes, error := mem_alloc(size_of(KTL_Slot(string)) * len(values), ainfo = backing); assert(error == .None);
kt^ = slice( transmute([^]KTL_Slot(string)) cursor(raw_bytes), len(raw_bytes) / size_of(KTL_Slot(string)) )
for id in 0 ..< len(values) {
mem_copy_non_overlapping(& kt[id].value, & values[id][1], size_of(string))
hash64_fnv1a(& kt[id].key, transmute([]byte) values[id][0])
}
}
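// Usage sketch: build once from pairs, then hash a label to look it up
// (context.allocator assumed compatible with Odin_Allocator):
pairs := [][2]string{ {"--path", "./code"}, {"--mode", "debug"} }
args: []KTL_Slot(string)
ktl_populate_slice_a2_str(& args, context.allocator, pairs)
key: u64; label := "--mode"
hash64_fnv1a(& key, transmute([]byte) label)
mode := ktl_get(args, key) // ^string pointing at "debug", or nil if absent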

View File

@@ -0,0 +1,142 @@
package grime
/*
Hash Table based on John's Jai one & Sean Barrett's.
I don't like the table definition containing
the allocator, hash, or compare procedure to be used.
So those have been stripped and are instead supplied at the procedure call site;
the parent container is responsible for tracking them.
TODO(Ed): Resolve appropriate Key-Table term for it.
TODO(Ed): Complete this later if we actually want something beyond KT1CX or Odin's map.
*/
KT_Slot :: struct(
$TypeHash: typeid,
$TypeKey: typeid,
$TypeValue: typeid
) {
hash: TypeHash,
key: TypeKey,
value: TypeValue,
}
KT :: struct($KT_Slot: typeid) {
load_factor_percent: int,
count: int,
allocated: int,
slots_filled: int,
slots: []KT_Slot,
}
KT_Info :: struct {
key_width: int,
value_width: int,
slot_width: int,
}
KT_Opaque :: struct {
count: int,
allocated: int,
slots_filled: int,
slots: []byte,
}
KT_ByteMeta :: struct {
hash_width: int,
value_width: int,
}
KT_COUNT_COLLISIONS :: #config(KT_COUNT_COLLISIONS, false)
KT_HASH_NEVER_OCCUPIED :: 0
KT_HASH_REMOVED :: 1
KT_HASH_FIRST_VALID :: 2
KT_LOAD_FACTOR_PERCENT :: 70
kt_byte_init :: proc(info: KT_Info, tbl_allocator: Odin_Allocator, kt: ^KT_Opaque, $HashType: typeid)
{
#assert(size_of(HashType) >= 32)
assert(tbl_allocator.procedure != nil)
assert(info.value_width >= 32)
assert(info.slot_width >= 64)
}
kt_deinit :: proc(table: ^$KT / typeid, allocator: Odin_Allocator)
{
}
kt_walk_table_body_proc :: #type proc($TypeHash: typeid, hash: TypeHash, kt: ^KT_Opaque, info: KT_Info, id: TypeHash) -> (should_break: bool)
kt_walk_table :: proc($TypeHash: typeid, hash: TypeHash, kt: ^KT_Opaque, info: KT_Info, $walk_body: kt_walk_table_body_proc) -> (index: TypeHash)
{
mask := cast(TypeHash)(kt.allocated - 1) // Cast may truncate
hash := hash // params are immutable; copy so the sentinel remap below can mutate
if hash < KT_HASH_FIRST_VALID do hash += KT_HASH_FIRST_VALID
index = hash & mask
probe_increment: TypeHash = 1
for id := transmute(TypeHash) kt.slots[info.slot_width * index:]; id != 0;
{
if #force_inline walk_body(hash, kt, info, id) do break
index = (index + probe_increment) & mask
probe_increment += 1
}
return
}
// Will not expand table if capacity reached, user must do that check beforehand.
// Will return existing if hash found
kt_byte_add :: proc(value: [^]byte, key: [^]byte, hash: $TypeHash, kt: ^KT_Opaque, info: KT_Info)-> [^]byte
{
assert(kt.slots_filled < kt.allocated)
index := #force_inline kt_walk_table(hash, kt, info,
proc(hash: $TypeHash, kt: ^KT_Opaque, info: KT_Info, id: TypeHash) -> (should_break: bool)
{
if id == KT_HASH_REMOVED {
kt.slots_filled -= 1
should_break = true
return
}
//TODO(Ed): Add collision tracking
return
})
kt.count += 1
kt.slots_filled += 1
slot_offset := uintptr(info.slot_width) * uintptr(index)
entry := cursor(kt.slots)[slot_offset:]
mem_copy_non_overlapping(entry, hash, size_of(TypeHash))
mem_copy_non_overlapping(entry[size_of(hash):], key, info.key_width)
mem_copy_non_overlapping(entry[size_of(hash) + size_of(key):], value, info.value_width)
return entry
}
// Will not expand table if capacity reached, user must do that check beforehand.
// Will override if hash exists
kt_byte_set :: proc()
{
}
kt_remove :: proc()
{
}
kt_byte_contains :: proc()
{
}
kt_byte_find_pointer :: proc()
{
}
kt_find :: proc()
{
}
kt_find_multiple :: proc()
{
}
kt_next_power_of_two :: #force_inline proc(x: int) -> int { power := 1; for x > power do power += power; return power }
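// Hedged sketch of the probe scheme kt_walk_table implements: hashes 0 and 1 are reserved
// sentinels and get remapped, then slot indices walk a power-of-two table under a mask
// with an increasing stride.
hash := u64(12345)
if hash < KT_HASH_FIRST_VALID do hash += KT_HASH_FIRST_VALID
mask  := u64(kt_next_power_of_two(64) - 1) // tables are sized to powers of two
index := hash & mask                       // first probe; subsequent probes add 1, 2, 3, ...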

View File

@@ -5,11 +5,33 @@ Mega :: Kilo * 1024
Giga :: Mega * 1024
Tera :: Giga * 1024
// Provides the nearest prime number value for the given capacity
closest_prime :: proc(capacity: uint) -> uint
{
prime_table : []uint = {
53, 97, 193, 389, 769, 1543, 3079, 6151, 12289, 24593,
49157, 98317, 196613, 393241, 786433, 1572869, 3145739,
6291469, 12582917, 25165843, 50331653, 100663319,
201326611, 402653189, 805306457, 1610612741, 3221225473, 6442450939
};
for slot in prime_table {
if slot >= capacity {
return slot
}
}
return prime_table[len(prime_table) - 1]
}
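// Usage sketch: a request lands on the first entry at or above it; anything past the
// last entry clamps to it.
example_capacity := closest_prime(1000) // 1543, since 769 < 1000 <= 1543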
raw_cursor :: #force_inline proc "contextless" (ptr: rawptr) -> [^]byte { return transmute([^]byte) ptr }
ptr_cursor :: #force_inline proc "contextless" (ptr: ^$Type) -> [^]Type { return transmute([^]Type) ptr }
@(require_results) is_power_of_two :: #force_inline proc "contextless" (x: uintptr) -> bool { return (x > 0) && ((x & (x-1)) == 0) }
@(require_results)
align_pow2_uint :: #force_inline proc "contextless" (ptr, align: uint) -> uint {
assert_contextless(is_power_of_two(uintptr(align)))
return ptr & ~(align-1)
}
@(require_results)
align_pow2 :: #force_inline proc "contextless" (ptr, align: int) -> int {
assert_contextless(is_power_of_two(uintptr(align)))
return ptr & ~(align-1)
@@ -51,8 +73,8 @@ slice_copy :: #force_inline proc "contextless" (dst, src: $SliceType / []$Type)
slice_fill :: #force_inline proc "contextless" (s: $SliceType / []$Type, value: Type) { memory_fill(cursor(s), value, len(s)) }
@(require_results) slice_to_bytes :: #force_inline proc "contextless" (s: []$Type) -> []byte { return ([^]byte)(raw_data(s))[:len(s) * size_of(Type)] }
@(require_results) slice_raw :: #force_inline proc "contextless" (s: []$Type) -> SliceRaw(Type) { return transmute(SliceRaw(Type)) s }
@(require_results) slice_to_bytes :: #force_inline proc "contextless" (s: []$Type) -> []byte { return ([^]byte)(raw_data(s))[:len(s) * size_of(Type)] }
@(require_results) slice_raw :: #force_inline proc "contextless" (s: []$Type) -> SliceRaw(Type) { return transmute(SliceRaw(Type)) s }
@(require_results) type_to_bytes :: #force_inline proc "contextless" (obj: ^$Type) -> []byte { return ([^]byte)(obj)[:size_of(Type)] }

View File

@@ -5,6 +5,8 @@
It only makes sure that memory allocations don't collide in the allocator and deallocations don't occur for memory never allocated.
I'm keeping it around as an artifact & for future allocators I may make.
NOTE(Ed): Prefer sanitizers
*/
package grime
@@ -17,7 +19,7 @@ MemoryTracker :: struct {
entries : Array(MemoryTrackerEntry),
}
Track_Memory :: true
Track_Memory :: false
@(disabled = Track_Memory == false)
memtracker_clear :: proc (tracker: MemoryTracker) {

View File

@@ -6,6 +6,7 @@ import "base:builtin"
import "base:intrinsics"
atomic_thread_fence :: intrinsics.atomic_thread_fence
mem_zero_volatile :: intrinsics.mem_zero_volatile
add_overflow :: intrinsics.overflow_add
// mem_zero :: intrinsics.mem_zero
// mem_copy :: intrinsics.mem_copy_non_overlapping
// mem_copy_overlapping :: intrinsics.mem_copy
@@ -140,7 +141,7 @@ copy :: proc {
mem_copy,
slice_copy,
}
copy_non_overlaping :: proc {
copy_non_overlapping :: proc {
mem_copy_non_overlapping,
slice_copy_overlapping,
}

View File

@@ -0,0 +1,30 @@
package grime
StrKey_U4 :: struct {
len: u32, // Length of string
offset: u32, // Offset in varena
}
StrKT_U4_Cell_Depth :: 4
StrKT_U4_Slot :: KTCX_Slot(StrKey_U4)
StrKT_U4_Cell :: KTCX_Cell(StrKT_U4_Slot, 4)
StrKT_U4_Table :: KTCX(StrKT_U4_Cell)
VStrKT_U4 :: struct {
varena: VArena, // Backed by growing vmem
kt: StrKT_U4_Table,
}
vstrkt_u4_init :: proc(varena: ^VArena, capacity: int, cache: ^VStrKT_U4)
{
capacity := cast(int) closest_prime(cast(uint) capacity)
ktcx_init(capacity, varena_allocator(varena), &cache.kt)
return
}
vstrkt_u4_intern :: proc(cache: ^VStrKT_U4) -> StrKey_U4
{
// profile(#procedure)
return {}
}
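// StrKey_U4 is a compact 8-byte handle: an offset + length into the backing varena rather
// than a pointer. A hedged sketch of resolving a key back to its string (hypothetical
// helper; the intern proc above is still a stub):
vstrkt_u4_resolve :: proc(cache: ^VStrKT_U4, key: StrKey_U4) -> string {
return transmute(string) slice(cache.varena.reserve_start[key.offset:], int(key.len))
}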

View File

@@ -1,4 +1,10 @@
package grime
// TODO(Ed): Review this
import "base:runtime"
// TODO(Ed): Support address sanitizer
/*
So this is a virtual memory backed arena allocator designed
to take advantage of one large contiguous reserve of memory.
@@ -11,15 +17,259 @@ No other part of the program will directly touch the vitual memory interface dir
Thus for the scope of this prototype the Virtual Arena are the only interfaces to dynamic address spaces for the runtime of the client app.
The host application as well ideally (although this may not be the case for a while)
*/
VArena_GrowthPolicyProc :: #type proc( commit_used, committed, reserved, requested_size : uint ) -> uint
VArena :: struct {
using vmem: VirtualMemoryRegion,
tracker: MemoryTracker,
dbg_name: string,
commit_used: uint,
growth_policy: VArena_GrowthPolicyProc,
allow_any_resize: b32,
mutex: Mutex,
VArenaFlags :: bit_set[VArenaFlag; u32]
VArenaFlag :: enum u32 {
No_Large_Pages,
}
VArena :: struct {
using vmem: VirtualMemoryRegion,
commit_size: int,
commit_used: int,
flags: VArenaFlags,
}
// Default growth_policy is varena_default_growth_policy
varena_make :: proc(to_reserve, commit_size: int, base_address: uintptr, flags: VArenaFlags = {}
) -> (arena: ^VArena, alloc_error: AllocatorError)
{
page_size := virtual_get_page_size()
verify( page_size > size_of(VirtualMemoryRegion), "Make sure page size is not smaller than a VirtualMemoryRegion?")
verify( to_reserve >= page_size, "Attempted to reserve less than a page size" )
verify( commit_size >= page_size, "Attempted to commit less than a page size")
verify( to_reserve >= commit_size, "Attempted to commit more than there is to reserve" )
vmem : VirtualMemoryRegion
vmem, alloc_error = virtual_reserve_and_commit( base_address, uint(to_reserve), uint(commit_size) )
if ensure(vmem.base_address == nil || alloc_error != .None, "Failed to allocate requested virtual memory for virtual arena") {
return
}
arena = transmute(^VArena) vmem.base_address;
arena.vmem = vmem
arena.commit_used = align_pow2(size_of(arena), MEMORY_ALIGNMENT_DEFAULT)
arena.flags = flags
return
}
varena_alloc :: proc(self: ^VArena,
size: int,
alignment: int = MEMORY_ALIGNMENT_DEFAULT,
zero_memory := true,
location := #caller_location
) -> (data: []byte, alloc_error: AllocatorError)
{
verify( alignment & (alignment - 1) == 0, "Non-power of two alignment", location = location )
page_size := uint(virtual_get_page_size())
requested_size := uint(size)
if ensure(requested_size == 0, "Requested 0 size") do return nil, .Invalid_Argument
// ensure( requested_size > page_size, "Requested less than a page size, going to allocate a page size")
// requested_size = max(requested_size, page_size)
// TODO(Ed): Prevent multiple threads from entering here concurrently?
// sync.mutex_guard( & mutex )
commit_used := uint(self.commit_used)
reserved := uint(self.reserved)
commit_size := uint(self.commit_size)
alignment_offset := uint(0)
current_offset := uintptr(self.reserve_start) + uintptr(self.commit_used)
mask := uintptr(alignment - 1)
if (current_offset & mask != 0) do alignment_offset = uint(alignment) - uint(current_offset & mask)
size_to_allocate, overflow_signal := add_overflow( requested_size, alignment_offset )
if overflow_signal do return {}, .Out_Of_Memory
to_be_used : uint
to_be_used, overflow_signal = add_overflow( commit_used, size_to_allocate )
if (overflow_signal || to_be_used > reserved) do return {}, .Out_Of_Memory
header_offset := uint( uintptr(self.reserve_start) - uintptr(self.base_address) )
commit_left := self.committed - commit_used - header_offset
needs_more_committed := commit_left < size_to_allocate
if needs_more_committed {
profile("VArena Growing")
next_commit_size := max(to_be_used, commit_size)
alloc_error = virtual_commit( self.vmem, next_commit_size )
if alloc_error != .None do return
}
data_ptr := ([^]byte)(current_offset + uintptr(alignment_offset))
data = slice( data_ptr, requested_size )
commit_used += size_to_allocate
alloc_error = .None
// log_backing: [Kilobyte * 16]byte; backing_slice := log_backing[:]
// log( str_pfmt_buffer( backing_slice, "varena alloc - BASE: %p PTR: %X, SIZE: %d", cast(rawptr) self.base_address, & data[0], requested_size) )
if zero_memory {
// log( str_pfmt_buffer( backing_slice, "Zeroring data (Range: %p to %p)", raw_data(data), cast(rawptr) (uintptr(raw_data(data)) + uintptr(requested_size))))
// zero( data )
mem_zero( data_ptr, int(requested_size) )
}
return
}
varena_grow :: #force_inline proc(self: ^VArena, old_memory: []byte, requested_size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, should_zero := true, loc := #caller_location
) -> (data: []byte, error: AllocatorError)
{
if ensure(old_memory == nil, "Growing without old_memory?") {
data, error = varena_alloc(self, requested_size, alignment, should_zero, loc)
return
}
if ensure(requested_size == len(old_memory), "Requested grow when none needed") {
data = old_memory
return
}
alignment_offset := uintptr(cursor(old_memory)) & uintptr(alignment - 1)
if ensure(alignment_offset == 0 && requested_size < len(old_memory), "Requested a shrink from varena_grow") {
data = old_memory
return
}
old_memory_offset := cursor(old_memory)[len(old_memory):]
current_offset := self.reserve_start[self.commit_used:]
when false {
if old_size < page_size {
// We're dealing with an allocation that requested less than the minimum allocated on vmem.
// Provide them more of their actual memory
data = slice(transmute([^]byte)old_memory, size )
return
}
}
verify( old_memory_offset == current_offset,
"Cannot grow existing allocation in vitual arena to a larger size unless it was the last allocated" )
if old_memory_offset != current_offset
{
// Give it new memory and copy the old over. Old memory is unrecoverable until clear.
new_region : []byte
new_region, error = varena_alloc( self, requested_size, alignment, should_zero, loc )
if ensure(new_region == nil || error != .None, "Failed to grab new region") {
data = old_memory
return
}
copy_non_overlapping( cursor(new_region), cursor(old_memory), len(old_memory) )
data = new_region
// log_print_fmt("varena resize (new): old: %p %v new: %p %v", old_memory, old_size, (& data[0]), size)
return
}
new_region : []byte
new_region, error = varena_alloc( self, requested_size - len(old_memory), alignment, should_zero, loc)
if ensure(new_region == nil || error != .None, "Failed to grab new region") {
data = old_memory
return
}
data = slice(cursor(old_memory), requested_size )
// log_print_fmt("varena resize (expanded): old: %p %v new: %p %v", old_memory, old_size, (& data[0]), size)
return
}
varena_shrink :: proc(self: ^VArena, memory: []byte, requested_size: int, loc := #caller_location) -> (data: []byte, error: AllocatorError) {
if requested_size == len(memory) { return memory, .None }
if ensure(memory == nil, "Shrinking without old_memory?") do return memory, .Invalid_Argument
current_offset := self.reserve_start[self.commit_used:]
shrink_amount := len(memory) - requested_size
if shrink_amount < 0 { return memory, .None }
assert(cursor(memory) == current_offset)
self.commit_used -= shrink_amount
return memory[:requested_size], .None
}
varena_reset :: #force_inline proc(self: ^VArena) {
// TODO(Ed): Prevent multiple threads from entering here concurrently?
// sync.mutex_guard( & mutex )
self.commit_used = 0
}
varena_release :: #force_inline proc(self: ^VArena) {
// TODO(Ed): Prevent multiple threads from entering here concurrently?
// sync.mutex_guard( & mutex )
virtual_release( self.vmem )
self.commit_used = 0
}
varena_rewind :: #force_inline proc(arena: ^VArena, save_point: AllocatorSP, loc := #caller_location) {
assert_contextless(save_point.type_sig == varena_allocator_proc)
assert_contextless(save_point.slot >= 0 && save_point.slot <= int(arena.commit_used))
arena.commit_used = save_point.slot
}
varena_save :: #force_inline proc(arena: ^VArena) -> AllocatorSP { return AllocatorSP { type_sig = varena_allocator_proc, slot = cast(int) arena.commit_used }}
varena_allocator_proc :: proc(input: AllocatorProc_In, output: ^AllocatorProc_Out) {
assert(output != nil)
assert(input.data != nil)
arena := transmute(^VArena) input.data
switch input.op {
case .Alloc, .Alloc_NoZero:
output.allocation, output.error = varena_alloc(arena, input.requested_size, input.alignment, input.op == .Alloc, input.loc)
return
case .Free:
output.error = .Mode_Not_Implemented
case .Reset:
varena_reset(arena)
case .Grow, .Grow_NoZero:
output.allocation, output.error = varena_grow(arena, input.old_allocation, input.requested_size, input.alignment, input.op == .Grow, input.loc)
case .Shrink:
output.allocation, output.error = varena_shrink(arena, input.old_allocation, input.requested_size)
case .Rewind:
varena_rewind(arena, input.save_point)
case .SavePoint:
output.save_point = varena_save(arena)
case .Query:
output.features = {.Alloc, .Reset, .Grow, .Shrink, .Rewind}
output.max_alloc = int(arena.reserved) - arena.commit_used
output.min_alloc = 0
output.left = output.max_alloc
output.save_point = varena_save(arena)
}
}
varena_odin_allocator_proc :: proc(
allocator_data : rawptr,
mode : Odin_AllocatorMode,
size : int,
alignment : int,
old_memory : rawptr,
old_size : int,
location : SourceCodeLocation = #caller_location
) -> (data: []byte, alloc_error: AllocatorError)
{
arena := transmute( ^VArena) allocator_data
page_size := uint(virtual_get_page_size())
switch mode {
case .Alloc, .Alloc_Non_Zeroed:
data, alloc_error = varena_alloc( arena, size, alignment, (mode == .Alloc), location )
return
case .Free:
alloc_error = .Mode_Not_Implemented
case .Free_All:
varena_reset( arena )
case .Resize, .Resize_Non_Zeroed:
if size > old_size do data, alloc_error = varena_grow (arena, slice(cursor(old_memory), old_size), size, alignment, (mode == .Resize), location)
else do data, alloc_error = varena_shrink(arena, slice(cursor(old_memory), old_size), size, location)
case .Query_Features:
set := cast( ^Odin_AllocatorModeSet) old_memory
if set != nil do (set ^) = {.Alloc, .Alloc_Non_Zeroed, .Free_All, .Resize, .Query_Features}
case .Query_Info:
info := (^Odin_AllocatorQueryInfo)(old_memory)
info.pointer = transmute(rawptr) varena_save(arena).slot
info.size = cast(int) arena.reserved
info.alignment = MEMORY_ALIGNMENT_DEFAULT
return to_bytes(info), nil
}
return
}
varena_odin_allocator :: proc(arena: ^VArena) -> (allocator: Odin_Allocator) {
allocator.procedure = varena_odin_allocator_proc
allocator.data = arena
return
}
when ODIN_DEBUG {
varena_ainfo :: #force_inline proc "contextless" (arena: ^VArena) -> AllocatorInfo { return AllocatorInfo{proc_id = .VArena, data = arena} }
varena_allocator :: #force_inline proc "contextless" (arena: ^VArena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{proc_id = .VArena, data = arena} }
}
else {
varena_ainfo :: #force_inline proc "contextless" (arena: ^VArena) -> AllocatorInfo { return AllocatorInfo{procedure = varena_allocator_proc, data = arena} }
varena_allocator :: #force_inline proc "contextless" (arena: ^VArena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{procedure = varena_allocator_proc, data = arena} }
}
varena_push_item :: #force_inline proc(va: ^VArena, $Type: typeid, alignment: int = MEMORY_ALIGNMENT_DEFAULT, should_zero := true, location := #caller_location
) -> (^Type, AllocatorError) {
raw, error := varena_alloc(va, size_of(Type), alignment, should_zero, location)
return transmute(^Type) cursor(raw), error
}
varena_push_slice :: #force_inline proc(va: ^VArena, $Type: typeid, amount: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, should_zero := true, location := #caller_location
) -> ([]Type, AllocatorError) {
raw, error := varena_alloc(va, size_of(Type) * amount, alignment, should_zero, location)
return slice(transmute([^]Type) cursor(raw), len(raw) / size_of(Type)), error
}
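// Minimal usage sketch of the virtual arena (sizes are illustrative):
varena, verr := varena_make(Mega * 64, Mega * 4, 0)
assert(verr == .None)
bytes, _ := varena_alloc(varena, Kilo)         // raw allocation
sp := varena_save(varena)                      // save point at the current commit_used
ints, _ := varena_push_slice(varena, int, 128) // typed helper
varena_rewind(varena, sp)                      // drops the slice again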

View File

@@ -0,0 +1,126 @@
package grime
/*
Arena (Chained Virtual Arenas):
*/
ArenaFlags :: bit_set[ArenaFlag; u32]
ArenaFlag :: enum u32 {
No_Large_Pages,
No_Chaining,
}
Arena :: struct {
backing: ^VArena,
prev: ^Arena,
current: ^Arena,
base_pos: int,
pos: int,
flags: ArenaFlags,
}
arena_make :: proc(reserve_size : int = Mega * 64, commit_size : int = Mega * 64, base_addr: uintptr = 0, flags: ArenaFlags = {}) -> ^Arena {
header_size := align_pow2(size_of(Arena), MEMORY_ALIGNMENT_DEFAULT)
current, error := varena_make(reserve_size, commit_size, base_addr, transmute(VArenaFlags) flags)
assert(error == .None)
assert(current != nil)
arena: ^Arena; arena, error = varena_push_item(current, Arena, 1)
assert(error == .None)
assert(arena != nil)
arena^ = Arena {
backing = current,
prev = nil,
current = arena,
base_pos = 0,
pos = header_size,
flags = flags,
}
return arena
}
arena_alloc :: proc(arena: ^Arena, size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT) -> []byte {
assert(arena != nil)
active := arena.current
size_requested := size
size_aligned := align_pow2(size_requested, alignment)
pos_pre := active.pos
pos_pst := pos_pre + size_aligned
reserved := int(active.backing.reserved)
should_chain := (.No_Chaining not_in arena.flags) && (reserved < pos_pst)
if should_chain {
new_arena := arena_make(reserved, active.backing.commit_size, 0, transmute(ArenaFlags) active.backing.flags)
new_arena.base_pos = active.base_pos + reserved
sll_stack_push_n(& arena.current, & new_arena, & new_arena.prev)
new_arena.prev = active
active = arena.current
}
result_ptr := transmute([^]byte) (uintptr(active) + uintptr(pos_pre))
vresult, error := varena_alloc(active.backing, size_aligned, alignment)
assert(error == .None)
slice_assert(vresult)
assert(raw_data(vresult) == result_ptr)
active.pos = pos_pst
return slice(result_ptr, size)
}
arena_release :: proc(arena: ^Arena) {
assert(arena != nil)
curr := arena.current
for curr != nil {
prev := curr.prev
varena_release(curr.backing)
curr = prev
}
}
arena_reset :: proc(arena: ^Arena) {
arena_rewind(arena, AllocatorSP { type_sig = arena_allocator_proc, slot = 0 })
}
arena_rewind :: proc(arena: ^Arena, save_point: AllocatorSP) {
assert(arena != nil)
assert(save_point.type_sig == arena_allocator_proc)
header_size := align_pow2(size_of(Arena), MEMORY_ALIGNMENT_DEFAULT)
curr := arena.current
big_pos := max(header_size, save_point.slot)
// Release arenas that are beyond the save point
for curr.base_pos >= big_pos {
prev := curr.prev
varena_release(curr.backing)
curr = prev
}
arena.current = curr
new_pos := big_pos - curr.base_pos
assert(new_pos <= curr.pos)
curr.pos = new_pos
varena_rewind(curr.backing, { type_sig = varena_allocator_proc, slot = curr.pos + size_of(VArena) })
}
arena_save :: #force_inline proc(arena: ^Arena) -> AllocatorSP { return { type_sig = arena_allocator_proc, slot = arena.base_pos + arena.current.pos } }
arena_allocator_proc :: proc(input: AllocatorProc_In, output: ^AllocatorProc_Out) {
panic("not implemented")
}
arena_odin_allocator_proc :: proc(
allocator_data : rawptr,
mode : Odin_AllocatorMode,
size : int,
alignment : int,
old_memory : rawptr,
old_size : int,
location : SourceCodeLocation = #caller_location
) -> (data: []byte, alloc_error: AllocatorError)
{
panic("not implemented")
}
when ODIN_DEBUG {
arena_ainfo :: #force_inline proc "contextless" (arena: ^Arena) -> AllocatorInfo { return AllocatorInfo{proc_id = .Arena, data = arena} }
arena_allocator :: #force_inline proc "contextless" (arena: ^Arena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{proc_id = .Arena, data = arena} }
}
else {
arena_ainfo :: #force_inline proc "contextless" (arena: ^Arena) -> AllocatorInfo { return AllocatorInfo{procedure = arena_allocator_proc, data = arena} }
arena_allocator :: #force_inline proc "contextless" (arena: ^Arena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{procedure = arena_allocator_proc, data = arena} }
}
arena_push_item :: proc()
{
}
arena_push_array :: proc()
{
}
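// Minimal usage sketch of the chained arena (defaults above reserve 64 MiB per link):
chained := arena_make()
block := arena_alloc(chained, Kilo * 16) // serviced by the current backing varena
sp := arena_save(chained)
_ = arena_alloc(chained, Kilo * 16)
arena_rewind(chained, sp)                // releases any chained links past the save point
arena_release(chained)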

View File

@@ -0,0 +1,28 @@
package grime
// TODO(Ed): Review this
import "base:runtime"
// TODO(Ed): Support address sanitizer
/*
Pool allocator backed by chained virtual arenas.
*/
Pool_FreeBlock :: struct { next: ^Pool_FreeBlock }
VPool :: struct {
arenas: ^Arena,
block_size: uint,
// alignment: uint,
free_list_head: ^Pool_FreeBlock,
}
pool_make :: proc() -> (pool: VPool, error: AllocatorError)
{
panic("not implemented")
// return
}
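// The free list is intrusive: a freed block's leading bytes are reused as the next pointer.
// Hedged sketch of the push/pop this structure implies (hypothetical helpers; pool_make
// above is still a stub):
pool_free_push :: proc(pool: ^VPool, block: rawptr) {
node := transmute(^Pool_FreeBlock) block
node.next = pool.free_list_head
pool.free_list_head = node
}
pool_free_pop :: proc(pool: ^VPool) -> rawptr {
node := pool.free_list_head
if node != nil do pool.free_list_head = node.next
return node
}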

View File

@@ -0,0 +1,15 @@
package grime
VSlabSizeClass :: struct {
vmem_reserve: uint,
block_size: uint,
block_alignment: uint,
}
Slab_Max_Size_Classes :: 24
SlabPolicy :: FStack(VSlabSizeClass, Slab_Max_Size_Classes)
VSlab :: struct {
pools: FStack(VPool, Slab_Max_Size_Classes),
}
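// Hedged sketch of filling a SlabPolicy using the fixed stack from earlier in this diff
// (size classes are illustrative):
policy: SlabPolicy
stack_push(& policy, VSlabSizeClass { vmem_reserve = Mega * 16, block_size = 64,  block_alignment = 16 })
stack_push(& policy, VSlabSizeClass { vmem_reserve = Mega * 32, block_size = 256, block_alignment = 16 })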

View File

@@ -23,14 +23,14 @@ load_client_api :: proc(version_id: int) -> (loaded_module: Client_API) {
file_copy_sync( Path_Sectr_Module, Path_Sectr_Live_Module, allocator = context.temp_allocator )
did_load: bool; lib, did_load = os_lib_load( Path_Sectr_Live_Module )
if ! did_load do panic( "Failed to load the sectr module.")
startup = cast( type_of( host_memory.client_api.startup)) os_lib_get_proc(lib, "startup")
shutdown = cast( type_of( host_memory.client_api.shutdown)) os_lib_get_proc(lib, "sectr_shutdown")
tick_lane_startup = cast( type_of( host_memory.client_api.tick_lane_startup)) os_lib_get_proc(lib, "tick_lane_startup")
job_worker_startup = cast( type_of( host_memory.client_api.job_worker_startup)) os_lib_get_proc(lib, "job_worker_startup")
hot_reload = cast( type_of( host_memory.client_api.hot_reload)) os_lib_get_proc(lib, "hot_reload")
tick_lane = cast( type_of( host_memory.client_api.tick_lane)) os_lib_get_proc(lib, "tick_lane")
clean_frame = cast( type_of( host_memory.client_api.clean_frame)) os_lib_get_proc(lib, "clean_frame")
jobsys_worker_tick = cast( type_of( host_memory.client_api.jobsys_worker_tick)) os_lib_get_proc(lib, "jobsys_worker_tick")
startup = transmute( type_of( host_memory.client_api.startup)) os_lib_get_proc(lib, "startup")
shutdown = transmute( type_of( host_memory.client_api.shutdown)) os_lib_get_proc(lib, "sectr_shutdown")
tick_lane_startup = transmute( type_of( host_memory.client_api.tick_lane_startup)) os_lib_get_proc(lib, "tick_lane_startup")
job_worker_startup = transmute( type_of( host_memory.client_api.job_worker_startup)) os_lib_get_proc(lib, "job_worker_startup")
hot_reload = transmute( type_of( host_memory.client_api.hot_reload)) os_lib_get_proc(lib, "hot_reload")
tick_lane = transmute( type_of( host_memory.client_api.tick_lane)) os_lib_get_proc(lib, "tick_lane")
clean_frame = transmute( type_of( host_memory.client_api.clean_frame)) os_lib_get_proc(lib, "clean_frame")
jobsys_worker_tick = transmute( type_of( host_memory.client_api.jobsys_worker_tick)) os_lib_get_proc(lib, "jobsys_worker_tick")
if startup == nil do panic("Failed to load sectr.startup symbol" )
if shutdown == nil do panic("Failed to load sectr.shutdown symbol" )
if tick_lane_startup == nil do panic("Failed to load sectr.tick_lane_startup symbol" )
@@ -151,6 +151,8 @@ main :: proc()
if thread_memory.id == .Master_Prepper {
thread_join_multiple(.. host_memory.threads[1:THREAD_TICK_LANES + THREAD_JOB_WORKERS])
}
host_memory.client_api.shutdown();
unload_client_api( & host_memory.client_api )

View File

@@ -83,6 +83,10 @@ import grime "codebase:grime"
grime_set_profiler_module_context :: grime.set_profiler_module_context
grime_set_profiler_thread_buffer :: grime.set_profiler_thread_buffer
ensure :: grime.ensure
fatal :: grime.fatal
verify :: grime.verify
file_is_locked :: grime.file_is_locked
logger_init :: grime.logger_init
to_odin_logger :: grime.to_odin_logger
@@ -137,24 +141,24 @@ import "codebase:sectr"
ThreadMemory :: sectr.ThreadMemory
WorkerID :: sectr.WorkerID
ensure :: #force_inline proc( condition : b32, msg : string, location := #caller_location ) {
if condition do return
log_print( msg, LoggerLevel.Warning, location )
debug_trap()
}
// TODO(Ed) : Setup exit codes!
fatal :: #force_inline proc( msg : string, exit_code : int = -1, location := #caller_location ) {
log_print( msg, LoggerLevel.Fatal, location )
debug_trap()
process_exit( exit_code )
}
// TODO(Ed) : Setup exit codes!
verify :: #force_inline proc( condition : b32, msg : string, exit_code : int = -1, location := #caller_location ) {
if condition do return
log_print( msg, LoggerLevel.Fatal, location )
debug_trap()
process_exit( exit_code )
}
// ensure :: #force_inline proc( condition : b32, msg : string, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Warning, location )
// debug_trap()
// }
// // TODO(Ed) : Setup exit codes!
// fatal :: #force_inline proc( msg : string, exit_code : int = -1, location := #caller_location ) {
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
// // TODO(Ed) : Setup exit codes!
// verify :: #force_inline proc( condition : b32, msg : string, exit_code : int = -1, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
log_print :: proc( msg : string, level := LoggerLevel.Info, loc := #caller_location ) {
context.allocator = arena_allocator(& host_memory.host_scratch)

View File

@@ -100,7 +100,8 @@ startup :: proc(host_mem: ^ProcessMemory, thread_mem: ^ThreadMemory)
log_print_fmt("Startup time: %v ms", startup_ms)
}
// For some reason odin's symbols conflict with native foreign symbols...
// NOTE(Ed): For some reason odin's symbols conflict with native foreign symbols...
// Called in host.main after all tick lane or job worker threads have joined.
@export
sectr_shutdown :: proc()
{

View File

@@ -183,11 +183,9 @@ poll_input_events :: proc( input, prev_input : ^InputState, input_events : Input
for prev_key, id in prev_input.keyboard.keys {
input.keyboard.keys[id].ended_down = prev_key.ended_down
}
for prev_btn, id in prev_input.mouse.btns {
input.mouse.btns[id].ended_down = prev_btn.ended_down
}
input.mouse.raw_pos = prev_input.mouse.raw_pos
input.mouse.pos = prev_input.mouse.pos
@@ -200,7 +198,6 @@ poll_input_events :: proc( input, prev_input : ^InputState, input_events : Input
if events.num > 0 {
last_frame = peek_back( events).frame_id
}
// No new events, don't update
if last_frame == prev_frame do return
@@ -232,7 +229,6 @@ poll_input_events :: proc( input, prev_input : ^InputState, input_events : Input
}
}
}
Iterate_Mouse_Events:
{
iter_obj := iterator( & mouse_events ); iter := & iter_obj
@@ -241,17 +237,13 @@ poll_input_events :: proc( input, prev_input : ^InputState, input_events : Input
if last_frame > event.frame_id {
break
}
process_digital_btn :: proc( btn : ^DigitalBtn, prev_btn : DigitalBtn, ended_down : b32 )
{
first_transition := btn.half_transitions == 0
btn.half_transitions += 1
btn.ended_down = ended_down
}
// logf("mouse event: %v", event)
// log_print_fmt("mouse event: %v", event)
#partial switch event.type {
case .Mouse_Pressed:
btn := & input.mouse.btns[event.btn]
@@ -277,22 +269,18 @@ poll_input_events :: proc( input, prev_input : ^InputState, input_events : Input
input.mouse.delta = event.delta * { 1, -1 }
}
}
prev_frame = last_frame
}
input_event_iter :: #force_inline proc () -> FRingBufferIterator(InputEvent) {
return iterator_ringbuf_fixed( & memory.client_memory.input_events.events )
}
input_key_event_iter :: #force_inline proc() -> FRingBufferIterator(InputKeyEvent) {
return iterator_ringbuf_fixed( & memory.client_memory.input_events.key_events )
}
input_mouse_event_iter :: #force_inline proc() -> FRingBufferIterator(InputMouseEvent) {
return iterator_ringbuf_fixed( & memory.client_memory.input_events.mouse_events )
}
input_codes_pressed_slice :: #force_inline proc() -> []rune {
return to_slice( memory.client_memory.input_events.codes_pressed )
}

View File

@@ -61,6 +61,10 @@ import "core:time"
tick_now :: time.tick_now
import "codebase:grime"
ensure :: grime.ensure
fatal :: grime.fatal
verify :: grime.verify
Array :: grime.Array
array_to_slice :: grime.array_to_slice
array_append_array :: grime.array_append_array
@@ -117,24 +121,24 @@ Tera :: Giga * 1024
S_To_MS :: grime.S_To_MS
ensure :: #force_inline proc( condition : b32, msg : string, location := #caller_location ) {
if condition do return
log_print( msg, LoggerLevel.Warning, location )
debug_trap()
}
// TODO(Ed) : Setup exit codes!
fatal :: #force_inline proc( msg : string, exit_code : int = -1, location := #caller_location ) {
log_print( msg, LoggerLevel.Fatal, location )
debug_trap()
process_exit( exit_code )
}
// TODO(Ed) : Setup exit codes!
verify :: #force_inline proc( condition : b32, msg : string, exit_code : int = -1, location := #caller_location ) {
if condition do return
log_print( msg, LoggerLevel.Fatal, location )
debug_trap()
process_exit( exit_code )
}
// ensure :: #force_inline proc( condition : b32, msg : string, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Warning, location )
// debug_trap()
// }
// // TODO(Ed) : Setup exit codes!
// fatal :: #force_inline proc( msg : string, exit_code : int = -1, location := #caller_location ) {
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
// // TODO(Ed) : Setup exit codes!
// verify :: #force_inline proc( condition : b32, msg : string, exit_code : int = -1, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
log_print :: proc( msg : string, level := LoggerLevel.Info, loc := #caller_location ) {
context.allocator = odin_arena_allocator(& memory.host_scratch)

View File

@@ -216,8 +216,8 @@ push-location $path_root
$build_args += $flag_microarch_zen5
$build_args += $flag_use_separate_modules
$build_args += $flag_thread_count + $CoreCount_Physical
$build_args += $flag_optimize_none
# $build_args += $flag_optimize_minimal
# $build_args += $flag_optimize_none
$build_args += $flag_optimize_minimal
# $build_args += $flag_optimize_speed
# $build_args += $flag_optimize_aggressive
$build_args += $flag_debug

View File

@@ -12,6 +12,8 @@ $url_odin_repo = 'https://github.com/Ed94/Odin.git'
$url_sokol = 'https://github.com/Ed94/sokol-odin.git'
$url_sokol_tools = 'https://github.com/floooh/sokol-tools-bin.git'
# TODO(Ed): https://github.com/karl-zylinski/odin-handle-map
$path_harfbuzz = join-path $path_thirdparty 'harfbuzz'
$path_ini_parser = join-path $path_thirdparty 'ini'
$path_odin = join-path $path_toolchain 'Odin'