Compare commits

...

11 Commits

Author SHA1 Message Date
Ed_ a0ddc3c26e minor misc (end of day stuff) 2025-10-21 23:21:07 -04:00
Ed_ 2303866c81 code2/grime progress 2025-10-21 22:57:23 -04:00
Ed_ 96c6d58ea0 Progress on code2/grime allocators 2025-10-21 22:10:48 -04:00
Ed_ f63b52f910 curate fixed stack 2025-10-21 22:10:23 -04:00
Ed_ 6d5215ac1e Make ensures/verifies in Array asserts 2025-10-21 22:08:29 -04:00
Ed_ 1e18592ff5 thinking about key tables... 2025-10-21 22:07:55 -04:00
Ed_ 43141183a6 wip messing around with adding jai flavored hash/key table. 2025-10-20 12:51:29 -04:00
Ed_ 0607d81f70 ignore .idea 2025-10-18 20:47:49 -04:00
Ed_ 58ba273dd1 code2: initial curation of virtual arena 2025-10-18 20:46:06 -04:00
Ed_ 0f621b4e1b Started to curate/move over input stuff 2025-10-18 15:01:30 -04:00
Ed_ 62979b480e Code2 Progress: more sokol stuff 2025-10-18 15:01:19 -04:00
42 changed files with 2615 additions and 592 deletions

1
.gitignore vendored
View File

@@ -35,3 +35,4 @@ ols.json
*.spall
sectr.user
sectr.proj
.idea

View File

@@ -2,7 +2,10 @@
This prototype aims to flesh out ideas I've wanted to explore further on code editing & related tooling.
The things to explore:
The current goal with the prototype is just making a good visualizer & note-aggregation tool for codebases & libraries.
My note repos with affine links give an idea of what that would look like.
The things to explore (future):
* 2D canvas for laying out code visualized in various types of ASTs
* WYSIWYG frontend ASTs
@@ -28,55 +31,14 @@ The dependencies are:
* [sokol-odin (Sectr Fork)](https://github.com/Ed94/sokol-odin)
* [sokol-tools](https://github.com/floooh/sokol-tools)
* Powershell (if you want to use my build scripts)
* backtrace (not used yet)
* freetype (not used yet)
* Eventually some config parser (maybe I'll use metadesk, or [ini](https://github.com/laytan/odin-ini-parser))
The project is so far in a "codebase bootstrapping" phase. Most of the work being done right now is setting up high-performance linear-zoom rendering for text and UI.
Text has recently hit sufficient performance targets, and now initial UX has become the focus.
The project is organized into 2 runtime modules: sectr_host & sectr.
The host module loads the main module & its memory, hot-reloading its DLL when it detects a change.
Codebase organization:
* App: General app config, state, and operations.
* Engine: client interface for host, tick, update, rendering.
* Has the following definitions: startup, shutdown, reload, tick, clean_frame (which the host hooks into when managing the client DLL)
* Will handle async ops.
* Font Provider: Manages fonts.
* Bulk of implementation maintained as a separate library: [VEFontCache-Odin](https://github.com/Ed94/VEFontCache-Odin)
* Grime: Name speaks for itself, stuff not directly related to the target features to iterate upon for the prototype.
* Defining dependency aliases or procedure overload tables, rolling own allocator, data structures, etc.
* Input: All human input related features
* Base input features (polling & related) are platform abstracted from sokol_app
* Entirely user rebindable
* Math: The usual for 2D/3D.
* Parsers:
* AST generation, editing, and serialization.
* Parsers for different levels of "syntactic & semantic awareness", Formatting -> Domain Specific AST
* Figure out pragmatic transformations between ASTs.
* Project: Encapsulation of user config/context/state separate from the persistent app's
* Manages the codebase (database & model view controller)
* Manages workspaces : View compositions of the codebase
* UI: Core graphical user interface framework, AST visualization & editing, backend visualization
* PIMGUI (Persistent Immediate Mode User Interface)
* Auto-layout
* Supports heavy procedural generation of box widgets
* Viewports
* Docking/Tiling, Floating, Canvas
Due to the nature of the prototype there are 'sub-groups', such as the codebase and the workspace, that are each their own ordeal.
They'll be elaborated on in their own documentation.
## Gallery
![img](docs/assets/sectr_host_2024-03-09_04-30-27.png)
![img](docs/assets/sectr_host_2024-05-04_12-29-39.png)
![img](docs/assets/Code_2024-05-04_12-55-53.png)
![img](docs/assets/sectr_host_2024-05-11_22-34-15.png)
![img](docs/assets/sectr_host_2024-05-15_03-32-36.png)
![img](docs/assets/Code_2024-05-21_23-15-16.gif)
## Notes

View File

@@ -115,8 +115,8 @@ AllocatorInfo :: struct {
// Listing of every single allocator (used on hot-reloadable builds)
AllocatorProcID :: enum uintptr {
FArena,
// VArena,
// CArena,
VArena,
Arena,
// Pool,
// Slab,
// Odin_Arena,
@@ -127,8 +127,8 @@ resolve_allocator_proc :: #force_inline proc "contextless" (procedure: $Allocato
when ODIN_DEBUG {
switch (transmute(AllocatorProcID)procedure) {
case .FArena: return farena_allocator_proc
// case .VArena: return varena_allocaotr_proc
// case .CArena: return carena_allocator_proc
case .VArena: return varena_allocator_proc
case .Arena: return arena_allocator_proc
// case .Pool: return pool_allocator_proc
// case .Slab: return slab_allocator_proc
// case .Odin_Arena: return odin_arena_allocator_proc
@@ -145,8 +145,8 @@ resolve_odin_allocator :: #force_inline proc "contextless" (allocator: Odin_Allo
when ODIN_DEBUG {
switch (transmute(AllocatorProcID)allocator.procedure) {
case .FArena: return { farena_odin_allocator_proc, allocator.data }
// case .VArena: return { varena_odin_allocaotr_proc, allocator.data }
// case .CArena: return { carena_odin_allocator_proc, allocator.data }
case .VArena: return { varena_odin_allocator_proc, allocator.data }
case .Arena: return { arena_odin_allocator_proc, allocator.data }
// case .Pool: return nil // pool_allocator_proc
// case .Slab: return nil // slab_allocator_proc
// case .Odin_Arena: return nil // odin_arena_allocator_proc
@@ -157,7 +157,7 @@ resolve_odin_allocator :: #force_inline proc "contextless" (allocator: Odin_Allo
switch (allocator.procedure) {
case farena_allocator_proc: return { farena_odin_allocator_proc, allocator.data }
case varena_allocator_proc: return { varena_odin_allocator_proc, allocator.data }
case carena_allocator_proc: return { carena_odin_allocator_proc, allocator.data }
case arena_allocator_proc: return { arena_odin_allocator_proc, allocator.data }
}
}
panic_contextless("Unresolvable procedure")
@@ -177,6 +177,7 @@ odin_allocator_mode_to_allocator_op :: #force_inline proc "contextless" (mode: O
panic_contextless("Impossible path")
}
// TODO(Ed): Change to DEFAULT_ALIGNMENT
MEMORY_ALIGNMENT_DEFAULT :: 2 * size_of(rawptr)
allocatorinfo :: #force_inline proc(ainfo := context.allocator) -> AllocatorInfo { return transmute(AllocatorInfo) ainfo }
@@ -205,7 +206,7 @@ mem_save_point :: proc(ainfo := context.allocator, loc := #caller_location) -> A
resolve_allocator_proc(ainfo.procedure)({data = ainfo.data, op = .SavePoint, loc = loc}, & out)
return out.save_point
}
mem_alloc :: proc(size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, no_zero: bool = false, ainfo : $Type = context.allocator, loc := #caller_location) -> ([]byte, AllocatorError) {
mem_alloc :: proc(size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, no_zero: bool = false, ainfo: $Type = context.allocator, loc := #caller_location) -> ([]byte, AllocatorError) {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
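A minimal usage sketch of the interface above, assuming the VArena type, the varena_allocator helper, and the Kilo constant defined elsewhere in this package; example_scratch_alloc is a hypothetical caller:

example_scratch_alloc :: proc(arena: ^VArena) {
	// Allocate 4 KiB through the generic interface; on debug builds the proc is resolved from the AllocatorProcID.
	block, error := mem_alloc(4 * Kilo, ainfo = varena_allocator(arena))
	assert(error == .None)
	// Capture a save point now; transient work done in block can later be rolled back via the .Rewind op.
	restore_point := mem_save_point(varena_allocator(arena))
	_ = block; _ = restore_point
}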

View File

@@ -1,6 +1,6 @@
package grime
// Below should be defined per-package
// TODO(Ed): Below should be defined per-package?
ensure :: #force_inline proc(condition: bool, msg: string, location := #caller_location) -> bool {
if condition do return true

View File

@@ -128,7 +128,7 @@ array_append_value :: proc(self: ^Array($Type), value: Type) -> AllocatorError {
// Assumes non-overlapping for items.
array_append_at_slice :: proc(self : ^Array($Type ), items: []Type, id: int) -> AllocatorError {
ensure(id < self.num, "Why are we doing an append at beyond the bounds of the current element count")
assert(id < self.num, "Why are we doing an append at beyond the bounds of the current element count")
id := id
if id >= self.num { return array_append_slice(items) }
if len(items) > self.capacity {
@@ -143,7 +143,7 @@ array_append_at_slice :: proc(self : ^Array($Type ), items: []Type, id: int) ->
return AllocatorError.None
}
array_append_at_value :: proc(self: ^Array($Type), item: Type, id: int) -> AllocatorError {
ensure(id < self.num, "Why are we doing an append at beyond the bounds of the current element count")
assert(id < self.num, "Why are we doing an append at beyond the bounds of the current element count")
id := id; {
// TODO(Ed): Not sure I want this...
if id >= self.num do id = self.num
@@ -159,7 +159,7 @@ array_append_at_value :: proc(self: ^Array($Type), item: Type, id: int) -> Alloc
return AllocatorError.None
}
array_back :: #force_inline proc "contextless" (self : Array($Type)) -> Type { assert(self.num > 0); return self.data[self.num - 1] }
array_back :: #force_inline proc "contextless" (self : Array($Type)) -> Type { assert_contextless(self.num > 0); return self.data[self.num - 1] }
array_clear :: #force_inline proc "contextless" (self: Array($Type), zero_data: bool = false) {
if zero_data do zero(self.data, int(self.num) * size_of(Type))
@@ -167,8 +167,8 @@ array_clear :: #force_inline proc "contextless" (self: Array($Type), zero_data:
}
array_fill :: proc(self: Array($Type), begin, end: u64, value: Type) -> bool {
ensure(end - begin <= num)
ensure(end <= num)
assert(end - begin <= num)
assert(end <= num)
if (end - begin > num) || (end > num) do return false
mem_fill(data[begin:], value, end - begin)
return true
@@ -183,7 +183,7 @@ array_push_back :: #force_inline proc "contextless" (self: Array($Type)) -> bool
}
array_remove_at :: proc(self: Array($Type), id: int) {
verify( id < self.num, "Attempted to remove from an index larger than the array" )
assert( id < self.num, "Attempted to remove from an index larger than the array" )
mem_copy(self.data[id:], self.data[id + 1:], (self.num - id) * size_of(Type))
self.num -= 1
}

View File

@@ -1,7 +1,7 @@
package grime
// TODO(Ed): Review when os2 is done.
// TODO(Ed): Make an async option...
// TODO(Ed): Make an async option?
file_copy_sync :: proc( path_src, path_dst: string, allocator := context.allocator ) -> b32
{
file_size : i64

View File

@@ -0,0 +1,126 @@
package grime
FRingBuffer :: struct( $Type: typeid, $Size: u32 ) {
head : u32,
tail : u32,
num : u32,
items : [Size] Type,
}
ringbuf_fixed_clear :: #force_inline proc "contextless" (ring: ^FRingBuffer($Type, $Size)) { ring.head = 0; ring.tail = 0; ring.num = 0 }
ringbuf_fixed_is_full :: #force_inline proc "contextless" (ring: FRingBuffer($Type, $Size)) -> bool { return ring.num == Size }
ringbuf_fixed_is_empty :: #force_inline proc "contextless" (ring: FRingBuffer($Type, $Size)) -> bool { return ring.num == 0 }
ringbuf_fixed_peek_front_ref :: #force_inline proc "contextless" (using buffer: ^FRingBuffer($Type, $Size)) -> ^Type {
assert_contextless(num > 0, "Attempted to peek an empty ring buffer")
return & items[ head ]
}
ringbuf_fixed_peek_front :: #force_inline proc "contextless" ( using buffer : FRingBuffer( $Type, $Size)) -> Type {
assert_contextless(num > 0, "Attempted to peek an empty ring buffer")
return items[ head ]
}
ringbuf_fixed_peak_back :: #force_inline proc (using buffer : FRingBuffer( $Type, $Size)) -> Type {
assert_contextless(num > 0, "Attempted to peek an empty ring buffer")
buf_size := u32(Size)
index := (tail - 1 + buf_size) % buf_size
return items[ index ]
}
ringbuf_fixed_push :: #force_inline proc(using buffer: ^FRingBuffer($Type, $Size), value: Type) {
if num == Size do head = (head + 1) % Size
else do num += 1
items[ tail ] = value
tail = (tail + 1) % Size
}
ringbuf_fixed_push_slice :: proc "contextless" (buffer: ^FRingBuffer($Type, $Size), slice: []Type) -> u32
{
size := u32(Size)
slice_size := u32(len(slice))
assert_contextless( slice_size <= size, "Attempting to append a slice that is larger than the ring buffer!" )
if slice_size == 0 do return 0
items_to_add := min( slice_size, size)
items_added : u32 = 0
if items_to_add > Size - buffer.num {
// Some or all existing items will be overwritten
overwrite_count := items_to_add - (Size - buffer.num)
buffer.head = (buffer.head + overwrite_count) % size
buffer.num = size
}
else {
buffer.num += items_to_add
}
if items_to_add <= size {
// Case 1: Slice fits entirely or partially in the buffer
space_to_end := size - buffer.tail
first_chunk := min(items_to_add, space_to_end)
// First copy: from tail to end of buffer
copy( buffer.items[ buffer.tail: ] , slice[ :first_chunk ] )
if first_chunk < items_to_add {
// Second copy: wrap around to start of buffer
second_chunk := items_to_add - first_chunk
copy( buffer.items[:], slice[ first_chunk : items_to_add ] )
}
buffer.tail = (buffer.tail + items_to_add) % Size
items_added = items_to_add
}
else
{
// Case 2: Slice is larger than buffer, only keep last Size elements
to_add := slice[ slice_size - size: ]
// First copy: from start of buffer to end
first_chunk := min(Size, u32(len(to_add)))
copy( buffer.items[:], to_add[ :first_chunk ] )
if first_chunk < Size {
// Second copy: wrap around
copy( buffer.items[ first_chunk: ], to_add[ first_chunk: ] )
}
buffer.head = 0
buffer.tail = 0
buffer.num = Size
items_added = Size
}
return items_added
}
ringbuf_fixed_pop :: #force_inline proc "contextless" (using buffer: ^FRingBuffer($Type, $Size)) -> Type {
assert_contextless(num > 0, "Attempted to pop an empty ring buffer")
value := items[ head ]
head = ( head + 1 ) % Size
num -= 1
return value
}
FRingBufferIterator :: struct($Type : typeid) {
items : []Type,
head : u32,
tail : u32,
index : u32,
remaining : u32,
}
iterator_ringbuf_fixed :: proc "contextless" (buffer: ^FRingBuffer($Type, $Size)) -> FRingBufferIterator(Type)
{
iter := FRingBufferIterator(Type){
items = buffer.items[:],
head = buffer.head,
tail = buffer.tail,
remaining = buffer.num,
}
buff_size := u32(Size)
if buffer.num > 0 {
// Start from the last pushed item (one before tail)
iter.index = (buffer.tail - 1 + buff_size) % buff_size
} else {
iter.index = buffer.tail // This will not be used as remaining is 0
}
return iter
}
next_ringbuf_fixed_iterator :: proc(iter: ^FRingBufferIterator($Type)) -> ^Type {
using iter; if remaining == 0 do return nil // If there are no items left to iterate over
buf_size := cast(u32) len(items)
result := &items[index]
// Decrement index and wrap around if necessary
index = (index - 1 + buf_size) % buf_size
remaining -= 1
return result
}
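A short usage sketch, assuming the FRingBuffer procedures above; ring_example is a hypothetical caller:

ring_example :: proc() {
	ring: FRingBuffer(int, 8)
	for value in 0 ..< 10 do ringbuf_fixed_push(& ring, value) // the two oldest values get overwritten
	// The iterator walks from the newest item (one before tail) back toward the oldest.
	iter := iterator_ringbuf_fixed(& ring)
	for item := next_ringbuf_fixed_iterator(& iter); item != nil; item = next_ringbuf_fixed_iterator(& iter) {
		_ = item^
	}
	oldest := ringbuf_fixed_pop(& ring) // removes the head value; num drops to 7
	_ = oldest
}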

View File

@@ -0,0 +1,29 @@
package grime
FStack :: struct ($Type: typeid, $Size: u32) {
items: [Size]Type,
idx: u32,
}
stack_clear :: #force_inline proc "contextless" (stack: ^FStack($Type, $Size)) { stack.idx = 0 }
stack_push :: #force_inline proc "contextless" (stack: ^FStack($Type, $Size ), value: Type) {
assert_contextless(stack.idx < u32(len( stack.items )), "Attempted to push on a full stack")
stack.items[stack.idx] = value
stack.idx += 1
}
stack_pop :: #force_inline proc "contextless" (stack: ^FStack($Type, $Size)) {
assert_contextless(stack.idx > 0, "Attempted to pop an empty stack")
stack.idx -= 1
if stack.idx == 0 {
stack.items[stack.idx] = {}
}
}
stack_peek_ref :: #force_inline proc "contextless" (s: ^FStack($Type, $Size)) -> (^Type) {
return & s.items[/*last_idx*/ max( 0, s.idx - 1 )]
}
stack_peek :: #force_inline proc "contextless" (s: ^FStack($Type, $Size)) -> Type {
return s.items[/*last_idx*/ max( 0, s.idx - 1 )]
}
stack_push_contextless :: #force_inline proc "contextless" (s: ^FStack($Type, $Size), value: Type) {
s.items[s.idx] = value
s.idx += 1
}
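A short usage sketch, assuming the FStack procedures above; stack_example is a hypothetical caller:

stack_example :: proc "contextless" () -> f32 {
	stack: FStack(f32, 16)
	stack_push(& stack, 1.5)
	stack_push(& stack, 2.5)
	top := stack_peek(& stack) // reads items[idx - 1] without changing idx
	stack_pop(& stack)         // idx drops back to 1
	return top                 // 2.5
}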

View File

@@ -1,9 +1,20 @@
package grime
hash32_djb8 :: #force_inline proc "contextless" ( hash : ^u32, bytes : []byte ) {
hash32_djb8 :: #force_inline proc "contextless" (hash: ^u32, bytes: []byte ) {
for value in bytes do (hash^) = (( (hash^) << 8) + (hash^) ) + u32(value)
}
hash64_djb8 :: #force_inline proc "contextless" ( hash : ^u64, bytes : []byte ) {
hash64_djb8 :: #force_inline proc "contextless" (hash: ^u64, bytes: []byte ) {
for value in bytes do (hash^) = (( (hash^) << 8) + (hash^) ) + u64(value)
}
// Ripped from core:hash, fnv32a
@(optimization_mode="favor_size")
hash32_fnv1a :: #force_inline proc "contextless" (hash: ^u32, data: []byte, seed := u32(0x811c9dc5)) {
hash^ = seed; for b in data { hash^ = (hash^ ~ u32(b)) * 0x01000193 }
}
// Ripped from core:hash, fnv64a
@(optimization_mode="favor_size")
hash64_fnv1a :: #force_inline proc "contextless" (hash: ^u64, data: []byte, seed := u64(0xcbf29ce484222325)) {
hash^ = seed; for b in data { hash^ = (hash^ ~ u64(b)) * 0x100000001b3 }
}
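A small usage sketch, assuming the fnv1a helpers above; key_from_name is a hypothetical wrapper:

key_from_name :: #force_inline proc "contextless" (name: string) -> (key: u64) {
	// Hashes the string's bytes with the default fnv64a offset-basis seed.
	hash64_fnv1a(& key, transmute([]byte) name)
	return
}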

View File

@@ -1,164 +0,0 @@
package grime
import "base:intrinsics"
/*
Key Table 1-Layer Chained-Chunked-Cells
*/
KT1CX_Slot :: struct($type: typeid) {
value: type,
key: u64,
occupied: b32,
}
KT1CX_Cell :: struct($type: typeid, $depth: int) {
slots: [depth]KT1CX_Slot(type),
next: ^KT1CX_Cell(type, depth),
}
KT1CX :: struct($cell: typeid) {
table: []cell,
}
KT1CX_Byte_Slot :: struct {
key: u64,
occupied: b32,
}
KT1CX_Byte_Cell :: struct {
next: ^byte,
}
KT1CX_Byte :: struct {
table: []byte,
}
KT1CX_ByteMeta :: struct {
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
KT1CX_InfoMeta :: struct {
table_size: int,
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
KT1CX_Info :: struct {
backing_table: AllocatorInfo,
}
kt1cx_init :: proc(info: KT1CX_Info, m: KT1CX_InfoMeta, result: ^KT1CX_Byte) {
assert(result != nil)
assert(info.backing_table.procedure != nil)
assert(m.cell_depth > 0)
assert(m.table_size >= 4 * Kilo)
assert(m.type_width > 0)
table_raw, error := mem_alloc(m.table_size * m.cell_size, ainfo = allocator(info.backing_table))
assert(error == .None); slice_assert(transmute([]byte) table_raw)
(transmute(^SliceByte) & table_raw).len = m.table_size
result.table = table_raw
}
kt1cx_clear :: proc(kt: KT1CX_Byte, m: KT1CX_ByteMeta) {
cell_cursor := cursor(kt.table)
table_len := len(kt.table) * m.cell_size
for ; cell_cursor != end(kt.table); cell_cursor = cell_cursor[m.cell_size:] // for cell, cell_id in kt.table.cells
{
slots := SliceByte { cell_cursor, m.cell_depth * m.slot_size } // slots = cell.slots
slot_cursor := slots.data
for;; {
slot := slice(slot_cursor, m.slot_size) // slot = slots[slot_id]
zero(slot) // slot = {}
if slot_cursor == end(slots) { // if slot == end(slot)
next := slot_cursor[m.cell_next_offset:] // next = kt.table.cells[cell_id + 1]
if next != nil { // if next != nil
slots.data = next // slots = next.slots
slot_cursor = next
continue
}
}
slot_cursor = slot_cursor[m.slot_size:] // slot = slots[slot_id + 1]
}
}
}
kt1cx_slot_id :: proc(kt: KT1CX_Byte, key: u64, m: KT1CX_ByteMeta) -> u64 {
cell_size := m.cell_size // dummy value
hash_index := key % u64(len(kt.table))
return hash_index
}
kt1cx_get :: proc(kt: KT1CX_Byte, key: u64, m: KT1CX_ByteMeta) -> ^byte {
hash_index := kt1cx_slot_id(kt, key, m)
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // cell_id = 0
{
slots := slice(cell_cursor, m.cell_depth * m.slot_size) // slots = cell[cell_id].slots
slot_cursor := cell_cursor // slot_id = 0
for;;
{
slot := transmute(^KT1CX_Byte_Slot) slot_cursor[m.slot_key_offset:] // slot = cell[slot_id]
if slot.occupied && slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots)
{
cell_next := cell_cursor[m.cell_next_offset:] // cell.next
if cell_next != nil {
slots = slice(cell_next, len(slots)) // slots = cell.next
slot_cursor = cell_next
cell_cursor = cell_next // cell = cell.next
continue
}
else {
return nil
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
}
}
kt1cx_set :: proc(kt: KT1CX_Byte, key: u64, value: []byte, backing_cells: Odin_Allocator, m: KT1CX_ByteMeta) -> ^byte {
hash_index := kt1cx_slot_id(kt, key, m)
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // KT1CX_Cell(Type) cell = kt.table[hash_index]
{
slots := SliceByte {cell_cursor, m.cell_depth * m.slot_size} // cell.slots
slot_cursor := slots.data
for ;;
{
slot := transmute(^KT1CX_Byte_Slot) slot_cursor[m.slot_key_offset:]
if slot.occupied == false {
slot.occupied = true
slot.key = key
return cast(^byte) slot_cursor
}
else if slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots) {
curr_cell := transmute(^KT1CX_Byte_Cell) (uintptr(cell_cursor) + m.cell_next_offset) // curr_cell = cell
if curr_cell != nil {
slots.data = curr_cell.next
slot_cursor = curr_cell.next
cell_cursor = curr_cell.next
continue
}
else {
new_cell, _ := mem_alloc(m.cell_size, ainfo = backing_cells)
curr_cell.next = raw_data(new_cell)
slot = transmute(^KT1CX_Byte_Slot) cursor(new_cell)[m.slot_key_offset:]
slot.occupied = true
slot.key = key
return raw_data(new_cell)
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
return nil
}
}
kt1cx_assert :: proc(kt: $type / KT1CX) {
slice_assert(kt.table)
}
kt1cx_byte :: proc(kt: $type / KT1CX) -> KT1CX_Byte { return { slice( transmute([^]byte) cursor(kt.table), len(kt.table)) } }

View File

@@ -1,48 +0,0 @@
package grime
/*
Key Table 1-Layer Linear (KT1L)
*/
KT1L_Slot :: struct($Type: typeid) {
key: u64,
value: Type,
}
KT1L_Meta :: struct {
slot_size: uintptr,
kt_value_offset: uintptr,
type_width: uintptr,
type: typeid,
}
kt1l_populate_slice_a2_Slice_Byte :: proc(kt: ^[]byte, backing: AllocatorInfo, values: []byte, num_values: int, m: KT1L_Meta) {
assert(kt != nil)
if num_values == 0 { return }
table_size_bytes := num_values * int(m.slot_size)
kt^, _ = mem_alloc(table_size_bytes, ainfo = transmute(Odin_Allocator) backing)
slice_assert(kt ^)
kt_raw : SliceByte = transmute(SliceByte) kt^
for id in 0 ..< cast(uintptr) num_values {
slot_offset := id * m.slot_size // slot id
slot_cursor := kt_raw.data[slot_offset:] // slots[id] type: KT1L_<Type>
// slot_key := transmute(^u64) slot_cursor // slots[id].key type: U64
// slot_value := slice(slot_cursor[m.kt_value_offset:], m.type_width) // slots[id].value type: <Type>
a2_offset := id * m.type_width * 2 // a2 entry id
a2_cursor := cursor(values)[a2_offset:] // a2_entries[id] type: A2_<Type>
// a2_key := (transmute(^[]byte) a2_cursor) ^ // a2_entries[id].key type: <Type>
// a2_value := slice(a2_cursor[m.type_width:], m.type_width) // a2_entries[id].value type: <Type>
mem_copy_non_overlapping(slot_cursor[m.kt_value_offset:], a2_cursor[m.type_width:], cast(int) m.type_width) // slots[id].value = a2_entries[id].value
(transmute([^]u64) slot_cursor)[0] = 0;
hash64_djb8(transmute(^u64) slot_cursor, (transmute(^[]byte) a2_cursor) ^) // slots[id].key = hash64_djb8(a2_entries[id].key)
}
kt_raw.len = num_values
}
kt1l_populate_slice_a2 :: proc($Type: typeid, kt: ^[]KT1L_Slot(Type), backing: AllocatorInfo, values: [][2]Type) {
assert(kt != nil)
values_bytes := slice(transmute([^]u8) raw_data(values), len(values) * size_of([2]Type))
kt1l_populate_slice_a2_Slice_Byte(transmute(^[]byte) kt, backing, values_bytes, len(values), {
slot_size = size_of(KT1L_Slot(Type)),
kt_value_offset = offset_of(KT1L_Slot(Type), value),
type_width = size_of(Type),
type = Type,
})
}

View File

@@ -0,0 +1,196 @@
package grime
import "base:intrinsics"
/*
Key Table Chained-Chunked-Cells
Table has cells with a user-specified depth. Each cell is searched linearly if its first slot is occupied.
Table-allocated cells are looked up by hash.
If a cell is exhausted, additional cells are allocated and singly chained; this is reported to the user via a "cell_overflow" counter.
Slots track occupancy with a tombstone (occupied signal).
If the table ever needs to change its size, it should be a wipe and full traversal of the arena holding the values,
or maybe a wipe of that arena, as it may no longer be accessible.
Has a likelihood of cache misses (based on reading other impls of these kinds of tables).
Odin's hash map and Jai's are designed with open addressing, which avoids that.
Intended to be wrapped in a parent interface (such as a string cache). Keys are hashed by the table's user.
The table is not intended to directly store the type's value in its slots (expects the slot value to be some sort of reference).
The value should be stored in an arena.
Could be upgraded to an X-layer variant; not sure if that's ever viable.
Would essentially be segmenting the hash to address a multi-layered table lookup,
where one table leads to another hash-resolved id for a subtable, with a linear search of cells after.
*/
KTCX_Slot :: struct($type: typeid) {
value: type,
key: u64,
occupied: b32,
}
KTCX_Cell :: struct($type: typeid, $depth: int) {
slots: [depth]KTCX_Slot(type),
next: ^KTCX_Cell(type, depth),
}
KTCX :: struct($cell: typeid) {
table: []cell,
cell_overflow: int,
}
KTCX_Byte_Slot :: struct {
key: u64,
occupied: b32,
}
KTCX_Byte_Cell :: struct {
next: ^byte,
}
KTCX_Byte :: struct {
table: []byte,
cell_overflow: int,
}
KTCX_ByteMeta :: struct {
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
KTCX_Info :: struct {
table_size: int,
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
ktcx_byte :: #force_inline proc "contextless" (kt: $type / KTCX) -> KTCX_Byte { return { slice( transmute([^]byte) cursor(kt.table), len(kt.table)) } }
ktcx_init_byte :: proc(result: ^KTCX_Byte, tbl_backing: Odin_Allocator, m: KTCX_Info) {
assert(result != nil)
assert(tbl_backing.procedure != nil)
assert(m.cell_depth > 0)
assert(m.table_size >= 4 * Kilo)
assert(m.type_width > 0)
table_raw, error := mem_alloc(m.table_size * m.cell_size, ainfo = tbl_backing)
assert(error == .None); slice_assert(transmute([]byte) table_raw)
(transmute(^SliceByte) & table_raw).len = m.table_size
result.table = table_raw
}
ktcx_clear :: proc(kt: KTCX_Byte, m: KTCX_ByteMeta) {
cell_cursor := cursor(kt.table)
table_len := len(kt.table) * m.cell_size
for ; cell_cursor != end(kt.table); cell_cursor = cell_cursor[m.cell_size:] // for cell, cell_id in kt.table.cells
{
slots := SliceByte { cell_cursor, m.cell_depth * m.slot_size } // slots = cell.slots
slot_cursor := slots.data
for;; {
slot := slice(slot_cursor, m.slot_size) // slot = slots[slot_id]
zero(slot) // slot = {}
if slot_cursor == end(slots) { // if slot == end(slot)
next := slot_cursor[m.cell_next_offset:] // next = kt.table.cells[cell_id + 1]
if next != nil { // if next != nil
slots.data = next // slots = next.slots
slot_cursor = next
continue
}
}
slot_cursor = slot_cursor[m.slot_size:] // slot = slots[slot_id + 1]
}
}
}
ktcx_slot_id :: #force_inline proc "contextless" (table: []byte, key: u64) -> u64 {
return key % u64(len(table))
}
ktcx_get :: proc(kt: KTCX_Byte, key: u64, m: KTCX_ByteMeta) -> ^byte {
hash_index := key % u64(len(kt.table)) // ktcx_slot_id
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // cell_id = 0
{
slots := slice(cell_cursor, m.cell_depth * m.slot_size) // slots = cell[cell_id].slots
slot_cursor := cell_cursor // slot_id = 0
for;;
{
slot := transmute(^KTCX_Byte_Slot) slot_cursor[m.slot_key_offset:] // slot = cell[slot_id]
if slot.occupied && slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots)
{
cell_next := cell_cursor[m.cell_next_offset:] // cell.next
if cell_next != nil {
slots = slice(cell_next, len(slots)) // slots = cell.next
slot_cursor = cell_next
cell_cursor = cell_next // cell = cell.next
continue
}
else {
return nil
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
}
}
ktcx_set :: proc(kt: ^KTCX_Byte, key: u64, value: []byte, backing_cells: Odin_Allocator, m: KTCX_ByteMeta) -> ^byte {
hash_index := key % u64(len(kt.table)) // ktcx_slot_id
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // KTCX_Cell(Type) cell = kt.table[hash_index]
{
slots := SliceByte {cell_cursor, m.cell_depth * m.slot_size} // cell.slots
slot_cursor := slots.data
for ;;
{
slot := transmute(^KTCX_Byte_Slot) slot_cursor[m.slot_key_offset:]
if slot.occupied == false {
slot.occupied = true
slot.key = key
return cast(^byte) slot_cursor
}
else if slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots) {
curr_cell := transmute(^KTCX_Byte_Cell) (uintptr(cell_cursor) + m.cell_next_offset) // curr_cell = cell
if curr_cell != nil {
slots.data = curr_cell.next
slot_cursor = curr_cell.next
cell_cursor = curr_cell.next
continue
}
else {
ensure(false, "Exhausted a cell. Increase the table size?")
new_cell, _ := mem_alloc(m.cell_size, ainfo = backing_cells)
curr_cell.next = raw_data(new_cell)
slot = transmute(^KTCX_Byte_Slot) cursor(new_cell)[m.slot_key_offset:]
slot.occupied = true
slot.key = key
kt.cell_overflow += 1
return raw_data(new_cell)
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
return nil
}
}
// Type aware wrappers
ktcx_init :: #force_inline proc(table_size: int, tbl_backing: Odin_Allocator,
kt: ^$kt_type / KTCX(KTCX_Cell(KTCX_Slot($Type), $Depth))
){
ktcx_init_byte(transmute(^KTCX_Byte) kt, tbl_backing, {
table_size = table_size,
slot_size = size_of(KTCX_Slot(Type)),
slot_key_offset = offset_of(KTCX_Slot(Type), key),
cell_next_offset = offset_of(KTCX_Cell(Type, Depth), next),
cell_depth = Depth,
cell_size = size_of(KTCX_Cell(Type, Depth)),
type_width = size_of(Type),
type = Type,
})
}
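A minimal instantiation sketch mirroring the StrKT_U4 setup later in this diff; EntityRef and the entity_* names are hypothetical, and the table is assumed to be backed by a VArena:

EntityRef :: struct { slab_id: u32, slot: u32 }

Entity_Slot  :: KTCX_Slot(EntityRef)
Entity_Cell  :: KTCX_Cell(Entity_Slot, 4)
Entity_Table :: KTCX(Entity_Cell)

entity_table_init :: proc(arena: ^VArena, table: ^Entity_Table) {
	// ktcx_init_byte asserts table_size >= 4 * Kilo; closest_prime picks a prime bucket count above that.
	ktcx_init(cast(int) closest_prime(4 * Kilo), varena_allocator(arena), table)
}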

View File

@@ -0,0 +1,37 @@
package grime
/*
Key Table 1-Layer Linear (KT1L)
Mainly intended for doing linear lookups of key-paired values, e.g. arg-value parsing with label ids.
The table is built in one go from the key-value pairs. The default populate slice_a2 has the key and value as the same type.
*/
KTL_Slot :: struct($Type: typeid) {
key: u64,
value: Type,
}
KTL_Meta :: struct {
slot_size: int,
kt_value_offset: int,
type_width: int,
type: typeid,
}
ktl_get :: #force_inline proc "contextless" (kt: []KTL_Slot($Type), key: u64) -> ^Type {
for & slot in kt { if key == slot.key do return & slot.value; }
return nil
}
// Unique populator for key-value pair strings
ktl_populate_slice_a2_str :: #force_inline proc(kt: ^[]KTL_Slot(string), backing: Odin_Allocator, values: [][2]string) {
assert(kt != nil)
if len(values) == 0 { return }
raw_bytes, error := mem_alloc(size_of(KTL_Slot(string)) * len(values), ainfo = backing); assert(error == .None);
kt^ = slice( transmute([^]KTL_Slot(string)) cursor(raw_bytes), len(raw_bytes) / size_of(KTL_Slot(string)) )
for id in 0 ..< len(values) {
mem_copy_non_overlapping(& kt[id].value, & values[id][1], size_of(string))
hash64_fnv1a(& kt[id].key, transmute([]byte) values[id][0])
}
}
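A short usage sketch of the string populator above; ktl_example and the key-value pairs are hypothetical, and backing can be any Odin_Allocator (e.g. a varena):

ktl_example :: proc(backing: Odin_Allocator) {
	pairs := [][2]string{ {"width", "1280"}, {"height", "720"} }
	table: []KTL_Slot(string)
	ktl_populate_slice_a2_str(& table, backing, pairs)
	// Lookups hash the key string the same way the populator did.
	key: u64
	name := "width"
	hash64_fnv1a(& key, transmute([]byte) name)
	value := ktl_get(table, key) // ^string pointing at "1280", or nil if absent
	_ = value
}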

View File

@@ -0,0 +1,142 @@
package grime
/*
Hash Table based on John's Jai & Sean Barrett's
I don't like the table definition containing
the allocator, hash, or compare procedure to be used.
So those have been stripped and are instead supplied at the procedure call site;
the parent container is responsible for tracking them.
TODO(Ed): Resolve appropriate Key-Table term for it.
TODO(Ed): Complete this later if we actually want something beyond KT1CX or Odin's map.
*/
KT_Slot :: struct(
$TypeHash: typeid,
$TypeKey: typeid,
$TypeValue: typeid
) {
hash: TypeHash,
key: TypeKey,
value: TypeValue,
}
KT :: struct($KT_Slot: typeid) {
load_factor_percent: int,
count: int,
allocated: int,
slots_filled: int,
slots: []KT_Slot,
}
KT_Info :: struct {
key_width: int,
value_width: int,
slot_width: int,
}
KT_Opaque :: struct {
count: int,
allocated: int,
slots_filled: int,
slots: []byte,
}
KT_ByteMeta :: struct {
hash_width: int,
value_width: int,
}
KT_COUNT_COLLISIONS :: #config(KT_COUNT_COLLISIONS, false)
KT_HASH_NEVER_OCCUPIED :: 0
KT_HASH_REMOVED :: 1
KT_HASH_FIRST_VALID :: 2
KT_LOAD_FACTOR_PERCENT :: 70
kt_byte_init :: proc(info: KT_Info, tbl_allocator: Odin_Allocator, kt: ^KT_Opaque, $HashType: typeid)
{
#assert(size_of(HashType) >= 32)
assert(tbl_allocator.procedure != nil)
assert(info.value_width >= 32)
assert(info.slot_width >= 64)
}
kt_deinit :: proc(table: ^$KT / typeid, allocator: Odin_Allocator)
{
}
kt_walk_table_body_proc :: #type proc($TypeHash: typeid, hash: TypeHash, kt: ^KT_Opaque, info: KT_Info, id: TypeHash) -> (should_break: bool)
kt_walk_table :: proc($TypeHash: typeid, hash: TypeHash, kt: ^KT_Opaque, info: KT_Info, $walk_body: kt_walk_table_body_proc) -> (index: TypeHash)
{
mask := cast(TypeHash)(kt.allocated - 1) // Cast may truncate
if hash < KT_HASH_FIRST_VALID do hash += KT_HASH_FIRST_VALID
index : TypeHash = hash & mask
probe_increment: TypeHash = 1
for id := transmute(TypeHash) kt.slots[info.slot_width * index:]; id != 0;
{
if #force_inline walk_body(hash, kt, info, id) do break
index = (index + probe_increment) & mask
probe_increment += 1
}
}
// Will not expand table if capacity reached, user must do that check beforehand.
// Will return existing if hash found
kt_byte_add :: proc(value: [^]byte, key: [^]byte, hash: $TypeHash, kt: ^KT_Opaque, info: KT_Info)-> [^]byte
{
assert(kt.slots_filled < kt.allocated)
index := #force_inline kt_walk_table(hash, kt, info,
proc(hash: $TypeHash, kt: ^KT_Opaque, info: KT_Info, id: TypeHash) -> (should_break: bool)
{
if id == KT_HASH_REMOVED {
kt.slots_filled -= 1
should_break = true
return
}
//TODO(Ed): Add collision tracking
return
})
kt.count += 1
kt.slots_filled += 1
slot_offset := info.slot_width * index
entry := kt.slots[slot_offset:]
mem_copy_non_overlapping(entry, hash, size_of(TypeHash))
mem_copy_non_overlapping(entry[size_of(hash):], key, info.key_width)
mem_copy_non_overlapping(entry[size_of(hash) + size_of(key):], value, info.value_width)
return entry
}
// Will not expand table if capacity reached, user must do that check beforehand.
// Will override if hash exists
kt_byte_set :: proc()
{
}
kt_remove :: proc()
{
}
kt_byte_contains :: proc()
{
}
kt_byte_find_pointer :: proc()
{
}
kt_find :: proc()
{
}
kt_find_multiple :: proc()
{
}
kt_next_power_of_two :: #force_inline proc(x: int) -> int { power := 1; for ;x > power; do power += power; return power }

View File

@@ -5,19 +5,41 @@ Mega :: Kilo * 1024
Giga :: Mega * 1024
Tera :: Giga * 1024
// Provides the nearest prime number value for the given capacity
closest_prime :: proc(capacity: uint) -> uint
{
prime_table : []uint = {
53, 97, 193, 389, 769, 1543, 3079, 6151, 12289, 24593,
49157, 98317, 196613, 393241, 786433, 1572869, 3145739,
6291469, 12582917, 25165843, 50331653, 100663319,
201326611, 402653189, 805306457, 1610612741, 3221225473, 6442450941
};
for slot in prime_table {
if slot >= capacity {
return slot
}
}
return prime_table[len(prime_table) - 1]
}
raw_cursor :: #force_inline proc "contextless" (ptr: rawptr) -> [^]byte { return transmute([^]byte) ptr }
ptr_cursor :: #force_inline proc "contextless" (ptr: ^$Type) -> [^]Type { return transmute([^]Type) ptr }
@(require_results) is_power_of_two :: #force_inline proc "contextless" (x: uintptr) -> bool { return (x > 0) && ((x & (x-1)) == 0) }
@(require_results)
align_pow2_uint :: #force_inline proc "contextless" (ptr, align: uint) -> uint {
assert_contextless(is_power_of_two(uintptr(align)))
return ptr & ~(align-1)
}
@(require_results)
align_pow2 :: #force_inline proc "contextless" (ptr, align: int) -> int {
assert_contextless(is_power_of_two(uintptr(align)))
return ptr & ~(align-1)
}
memory_zero_explicit :: #force_inline proc "contextless" (data: rawptr, len: int) -> rawptr {
mem_zero_volatile(data, len) // Use the volatile mem_zero
atomic_thread_fence(.Seq_Cst) // Prevent reordering
sync_mem_zero :: #force_inline proc "contextless" (data: rawptr, len: int) -> rawptr {
mem_zero_volatile(data, len) // Use the volatile mem_zero
sync_fence(.Seq_Cst) // Prevent reordering
return data
}
@@ -38,18 +60,21 @@ slice_assert :: #force_inline proc "contextless" (s: $SliceType / []$Type) {
slice_end :: #force_inline proc "contextless" (s : $SliceType / []$Type) -> ^Type { return cursor(s)[len(s):] }
slice_byte_end :: #force_inline proc "contextless" (s : SliceByte) -> ^byte { return s.data[s.len:] }
slice_zero :: #force_inline proc "contextless" (s: $SliceType / []$Type) {
assert_contextless(len(s) > 0)
mem_zero(raw_data(s), size_of(Type) * len(s))
}
slice_copy :: #force_inline proc "contextless" (dst, src: $SliceType / []$Type) -> int {
n := max(0, min(len(dst), len(src)))
if n > 0 {
mem_copy(raw_data(dst), raw_data(src), n * size_of(Type))
}
assert_contextless(n > 0)
mem_copy(raw_data(dst), raw_data(src), n * size_of(Type))
return n
}
slice_fill :: #force_inline proc "contextless" (s: $SliceType / []$Type, value: Type) { memory_fill(cursor(s), value, len(s)) }
@(require_results) slice_to_bytes :: #force_inline proc "contextless" (s: []$Type) -> []byte { return ([^]byte)(raw_data(s))[:len(s) * size_of(Type)] }
@(require_results) slice_raw :: #force_inline proc "contextless" (s: []$Type) -> SliceRaw(Type) { return transmute(SliceRaw(Type)) s }
@(require_results) slice_to_bytes :: #force_inline proc "contextless" (s: []$Type) -> []byte { return ([^]byte)(raw_data(s))[:len(s) * size_of(Type)] }
@(require_results) slice_raw :: #force_inline proc "contextless" (s: []$Type) -> SliceRaw(Type) { return transmute(SliceRaw(Type)) s }
@(require_results) type_to_bytes :: #force_inline proc "contextless" (obj: ^$Type) -> []byte { return ([^]byte)(obj)[:size_of(Type)] }
@@ -84,37 +109,33 @@ calc_padding_with_header :: proc "contextless" (pointer: uintptr, alignment: uin
}
// Helper to get the beginning of memory after a slice
memory_after :: #force_inline proc "contextless" ( s: []byte ) -> ( ^ byte) {
@(require_results)
memory_after :: #force_inline proc "contextless" (s: []byte ) -> (^byte) {
return cursor(s)[len(s):]
}
memory_after_header :: #force_inline proc "contextless" ( header : ^($ Type) ) -> ( [^]byte) {
memory_after_header :: #force_inline proc "contextless" (header: ^($Type)) -> ([^]byte) {
result := cast( [^]byte) ptr_offset( header, 1 )
// result := cast( [^]byte) (cast( [^]Type) header)[ 1:]
return result
}
@(require_results)
memory_align_formula :: #force_inline proc "contextless" ( size, align : uint) -> uint {
memory_align_formula :: #force_inline proc "contextless" (size, align: uint) -> uint {
result := size + align - 1
return result - result % align
}
// This is here just for docs
memory_misalignment :: #force_inline proc ( address, alignment : uintptr) -> uint {
memory_misalignment :: #force_inline proc "contextless" (address, alignment: uintptr) -> uint {
// address % alignment
assert(is_power_of_two(alignment))
assert_contextless(is_power_of_two(alignment))
return uint( address & (alignment - 1) )
}
// This is here just for docs
@(require_results)
memory_aign_forward :: #force_inline proc( address, alignment : uintptr) -> uintptr
memory_aign_forward :: #force_inline proc "contextless" (address, alignment : uintptr) -> uintptr
{
assert(is_power_of_two(alignment))
assert_contextless(is_power_of_two(alignment))
aligned_address := address
misalignment := cast(uintptr) memory_misalignment( address, alignment )
misalignment := transmute(uintptr) memory_misalignment( address, alignment )
if misalignment != 0 {
aligned_address += alignment - misalignment
}

View File

@@ -5,6 +5,8 @@
It only makes sure that memory allocations don't collide in the allocator and deallocations don't occur for memory never allocated.
I'm keeping it around as an artifact & for future allocators I may make.
NOTE(Ed): Prefer sanitizers
*/
package grime
@@ -17,7 +19,7 @@ MemoryTracker :: struct {
entries : Array(MemoryTrackerEntry),
}
Track_Memory :: true
Track_Memory :: false
@(disabled = Track_Memory == false)
memtracker_clear :: proc (tracker: MemoryTracker) {

View File

@@ -6,6 +6,7 @@ import "base:builtin"
import "base:intrinsics"
atomic_thread_fence :: intrinsics.atomic_thread_fence
mem_zero_volatile :: intrinsics.mem_zero_volatile
add_overflow :: intrinsics.overflow_add
// mem_zero :: intrinsics.mem_zero
// mem_copy :: intrinsics.mem_copy_non_overlapping
// mem_copy_overlapping :: intrinsics.mem_copy
@@ -80,7 +81,7 @@ import "core:os"
file_truncate :: os.truncate
file_write :: os.write
file_read_entire_from_filename :: #force_inline proc(name: string, allocator := context.allocator, loc := #caller_location) -> (data: []byte, success: bool) { return os.read_entire_file_from_filename(name, resolve_odin_allocator(allocator), loc) }
file_read_entire_from_filename :: #force_inline proc(name: string, allocator := context.allocator, loc := #caller_location) -> ([]byte, bool) { return os.read_entire_file_from_filename(name, resolve_odin_allocator(allocator), loc) }
file_write_entire :: os.write_entire_file
file_read_entire :: proc {
@@ -91,15 +92,13 @@ import "core:strings"
StrBuilder :: strings.Builder
strbuilder_from_bytes :: strings.builder_from_bytes
import "core:slice"
slice_zero :: slice.zero
import "core:prof/spall"
Spall_Context :: spall.Context
Spall_Buffer :: spall.Buffer
import "core:sync"
Mutex :: sync.Mutex
sync_fence :: sync.atomic_thread_fence
sync_load :: sync.atomic_load_explicit
sync_store :: sync.atomic_store_explicit
@@ -122,54 +121,50 @@ array_append :: proc {
array_append_array,
array_append_slice,
}
array_append_at :: proc {
// array_append_at_array,
array_append_at_slice,
array_append_at_value,
}
cursor :: proc {
raw_cursor,
ptr_cursor,
slice_cursor,
string_cursor,
}
end :: proc {
slice_end,
slice_byte_end,
string_end,
}
copy :: proc {
mem_copy,
slice_copy,
}
copy_non_overlaping :: proc {
copy_non_overlapping :: proc {
mem_copy_non_overlapping,
slice_copy_overlapping,
}
fill :: proc {
mem_fill,
slice_fill,
}
iterator :: proc {
iterator_ringbuf_fixed,
}
make :: proc {
array_init,
}
peek_back :: proc {
ringbuf_fixed_peak_back,
}
to_bytes :: proc {
slice_to_bytes,
type_to_bytes,
}
to_string :: proc {
strings.to_string,
}
zero :: proc {
mem_zero,
slice_zero,

View File

@@ -1,168 +0,0 @@
package grime
RingBufferFixed :: struct( $Type: typeid, $Size: u32 ) {
head : u32,
tail : u32,
num : u32,
items : [Size] Type,
}
ringbuf_fixed_clear :: #force_inline proc "contextless" ( using buffer : ^RingBufferFixed( $Type, $Size)) {
head = 0
tail = 0
num = 0
}
ringbuf_fixed_is_full :: #force_inline proc "contextless" ( using buffer : RingBufferFixed( $Type, $Size)) -> bool {
return num == Size
}
ringbuf_fixed_is_empty :: #force_inline proc "contextless" ( using buffer : RingBufferFixed( $Type, $Size)) -> bool {
return num == 0
}
ringbuf_fixed_peek_front_ref :: #force_inline proc "contextless" ( using buffer : ^RingBufferFixed( $Type, $Size)) -> ^Type {
assert(num > 0, "Attempted to peek an empty ring buffer")
return & items[ head ]
}
ringbuf_fixed_peek_front :: #force_inline proc "contextless" ( using buffer : RingBufferFixed( $Type, $Size)) -> Type {
assert(num > 0, "Attempted to peek an empty ring buffer")
return items[ head ]
}
ringbuf_fixed_peak_back :: #force_inline proc ( using buffer : RingBufferFixed( $Type, $Size)) -> Type {
assert(num > 0, "Attempted to peek an empty ring buffer")
buf_size := u32(Size)
index := (tail - 1 + buf_size) % buf_size
return items[ index ]
}
ringbuf_fixed_push :: #force_inline proc(using buffer: ^RingBufferFixed($Type, $Size), value: Type) {
if num == Size do head = (head + 1) % Size
else do num += 1
items[ tail ] = value
tail = (tail + 1) % Size
}
ringbuf_fixed_push_slice :: proc(buffer: ^RingBufferFixed($Type, $Size), slice: []Type) -> u32
{
size := u32(Size)
slice_size := u32(len(slice))
// assert( slice_size <= size, "Attempting to append a slice that is larger than the ring buffer!" )
if slice_size == 0 do return 0
items_to_add := min( slice_size, size)
items_added : u32 = 0
if items_to_add > Size - buffer.num
{
// Some or all existing items will be overwritten
overwrite_count := items_to_add - (Size - buffer.num)
buffer.head = (buffer.head + overwrite_count) % size
buffer.num = size
}
else
{
buffer.num += items_to_add
}
if items_to_add <= size
{
// Case 1: Slice fits entirely or partially in the buffer
space_to_end := size - buffer.tail
first_chunk := min(items_to_add, space_to_end)
// First copy: from tail to end of buffer
copy( buffer.items[ buffer.tail: ] , slice[ :first_chunk ] )
if first_chunk < items_to_add {
// Second copy: wrap around to start of buffer
second_chunk := items_to_add - first_chunk
copy( buffer.items[:], slice[ first_chunk : items_to_add ] )
}
buffer.tail = (buffer.tail + items_to_add) % Size
items_added = items_to_add
}
else
{
// Case 2: Slice is larger than buffer, only keep last Size elements
to_add := slice[ slice_size - size: ]
// First copy: from start of buffer to end
first_chunk := min(Size, u32(len(to_add)))
copy( buffer.items[:], to_add[ :first_chunk ] )
if first_chunk < Size
{
// Second copy: wrap around
copy( buffer.items[ first_chunk: ], to_add[ first_chunk: ] )
}
buffer.head = 0
buffer.tail = 0
buffer.num = Size
items_added = Size
}
return items_added
}
ringbuf_fixed_pop :: #force_inline proc "contextless" ( using buffer : ^RingBufferFixed( $Type, $Size )) -> Type {
assert(num > 0, "Attempted to pop an empty ring buffer")
value := items[ head ]
head = ( head + 1 ) % Size
num -= 1
return value
}
RingBufferFixedIterator :: struct( $Type : typeid) {
items : []Type,
head : u32,
tail : u32,
index : u32,
remaining : u32,
}
iterator_ringbuf_fixed :: proc(buffer: ^RingBufferFixed($Type, $Size)) -> RingBufferFixedIterator(Type)
{
iter := RingBufferFixedIterator(Type){
items = buffer.items[:],
head = buffer.head,
tail = buffer.tail,
remaining = buffer.num,
}
buff_size := u32(Size)
if buffer.num > 0 {
// Start from the last pushed item (one before tail)
iter.index = (buffer.tail - 1 + buff_size) % buff_size
} else {
iter.index = buffer.tail // This will not be used as remaining is 0
}
return iter
}
next_ringbuf_fixed_iterator :: proc(iter : ^RingBufferFixedIterator( $Type)) -> ^Type
{
using iter
if remaining == 0 {
return nil // If there are no items left to iterate over
}
buf_size := cast(u32) len(items)
result := &items[index]
// Decrement index and wrap around if necessary
index = (index - 1 + buf_size) % buf_size
remaining -= 1
return result
}

View File

@@ -1,9 +1,7 @@
package grime
//region STATIC MEMORY
grime_memory: StaticMemory
@thread_local grime_thread: ThreadMemory
//endregion STATIC MEMORY
@(private) grime_memory: StaticMemory
@(private, thread_local) grime_thread: ThreadMemory
StaticMemory :: struct {
spall_context: ^Spall_Context,

View File

@@ -8,3 +8,13 @@ string_cursor :: #force_inline proc "contextless" (s: string) -> [^]u8 { return
string_copy :: #force_inline proc "contextless" (dst, src: string) { slice_copy (transmute([]byte) dst, transmute([]byte) src) }
string_end :: #force_inline proc "contextless" (s: string) -> ^u8 { return slice_end (transmute([]byte) s) }
string_assert :: #force_inline proc "contextless" (s: string) { slice_assert(transmute([]byte) s) }
str_to_cstr_capped :: proc(content: string, mem: []byte) -> cstring {
copy_len := min(len(content), len(mem) - 1)
if copy_len > 0 do copy(mem[:copy_len], transmute([]byte) content)
mem[copy_len] = 0
return transmute(cstring) raw_data(mem)
}
cstr_len_capped :: #force_inline proc "contextless" (content: cstring, cap: int) -> (len: int) { for len = 0; (len <= cap) && (transmute([^]byte)content)[len] != 0; len += 1 {} return }
cstr_to_str_capped :: #force_inline proc "contextless" (content: cstring, mem: []byte) -> string { return transmute(string) Raw_String { cursor(mem), cstr_len_capped (content, len(mem)) } }
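A small usage sketch of the capped conversions above; cstr_example and the scratch buffer are hypothetical:

cstr_example :: proc(scratch: []byte) {
	label := str_to_cstr_capped("sectr prototype", scratch) // NUL-terminated copy placed in scratch, truncated to len(scratch) - 1
	round_trip := cstr_to_str_capped(label, scratch)        // string header over scratch, length capped by len(scratch)
	_ = round_trip
}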

View File

@@ -0,0 +1,30 @@
package grime
StrKey_U4 :: struct {
len: u32, // Length of string
offset: u32, // Offset in varena
}
StrKT_U4_Cell_Depth :: 4
StrKT_U4_Slot :: KTCX_Slot(StrKey_U4)
StrKT_U4_Cell :: KTCX_Cell(StrKT_U4_Slot, 4)
StrKT_U4_Table :: KTCX(StrKT_U4_Cell)
VStrKT_U4 :: struct {
varena: VArena, // Backed by growing vmem
kt: StrKT_U4_Table,
}
vstrkt_u4_init :: proc(varena: ^VArena, capacity: int, cache: ^VStrKT_U4)
{
capacity := cast(int) closest_prime(cast(uint) capacity)
ktcx_init(capacity, varena_allocator(varena), &cache.kt)
return
}
vstrkt_u4_intern :: proc(cache: ^VStrKT_U4) -> StrKey_U4
{
// profile(#procedure)
return {}
}

View File

@@ -1,4 +1,10 @@
package grime
// TODO(Ed): Review this
import "base:runtime"
// TODO(Ed): Support address sanitizer
/*
So this is a virtual memory backed arena allocator designed
to take advantage of one large contiguous reserve of memory.
@@ -11,15 +17,259 @@ No other part of the program will directly touch the vitual memory interface dir
Thus for the scope of this prototype the Virtual Arenas are the only interfaces to dynamic address spaces for the runtime of the client app.
The host application as well ideally (although this may not be the case for a while)
*/
VArena_GrowthPolicyProc :: #type proc( commit_used, committed, reserved, requested_size : uint ) -> uint
VArena :: struct {
using vmem: VirtualMemoryRegion,
tracker: MemoryTracker,
dbg_name: string,
commit_used: uint,
growth_policy: VArena_GrowthPolicyProc,
allow_any_resize: b32,
mutex: Mutex,
VArenaFlags :: bit_set[VArenaFlag; u32]
VArenaFlag :: enum u32 {
No_Large_Pages,
}
VArena :: struct {
using vmem: VirtualMemoryRegion,
commit_size: int,
commit_used: int,
flags: VArenaFlags,
}
// Default growth_policy is varena_default_growth_policy
varena_make :: proc(to_reserve, commit_size: int, base_address: uintptr, flags: VArenaFlags = {}
) -> (arena: ^VArena, alloc_error: AllocatorError)
{
page_size := virtual_get_page_size()
verify( page_size > size_of(VirtualMemoryRegion), "Make sure page size is not smaller than a VirtualMemoryRegion?")
verify( to_reserve >= page_size, "Attempted to reserve less than a page size" )
verify( commit_size >= page_size, "Attempted to commit less than a page size")
verify( to_reserve >= commit_size, "Attempted to commit more than there is to reserve" )
vmem : VirtualMemoryRegion
vmem, alloc_error = virtual_reserve_and_commit( base_address, uint(to_reserve), uint(commit_size) )
if ensure(vmem.base_address == nil || alloc_error != .None, "Failed to allocate requested virtual memory for virtual arena") {
return
}
arena = transmute(^VArena) vmem.base_address;
arena.vmem = vmem
arena.commit_used = align_pow2(size_of(arena), MEMORY_ALIGNMENT_DEFAULT)
arena.flags = flags
return
}
varena_alloc :: proc(self: ^VArena,
size: int,
alignment: int = MEMORY_ALIGNMENT_DEFAULT,
zero_memory := true,
location := #caller_location
) -> (data: []byte, alloc_error: AllocatorError)
{
verify( alignment & (alignment - 1) == 0, "Non-power of two alignment", location = location )
page_size := uint(virtual_get_page_size())
requested_size := uint(size)
if ensure(requested_size == 0, "Requested 0 size") do return nil, .Invalid_Argument
// ensure( requested_size > page_size, "Requested less than a page size, going to allocate a page size")
// requested_size = max(requested_size, page_size)
// TODO(Ed): Prevent multiple threads from entering here concurrently?
// sync.mutex_guard( & mutex )
commit_used := uint(self.commit_used)
reserved := uint(self.reserved)
commit_size := uint(self.commit_size)
alignment_offset := uint(0)
current_offset := uintptr(self.reserve_start) + uintptr(self.commit_used)
mask := uintptr(alignment - 1)
if (current_offset & mask != 0) do alignment_offset = uint(alignment) - uint(current_offset & mask)
size_to_allocate, overflow_signal := add_overflow( requested_size, alignment_offset )
if overflow_signal do return {}, .Out_Of_Memory
to_be_used : uint
to_be_used, overflow_signal = add_overflow( commit_used, size_to_allocate )
if (overflow_signal || to_be_used > reserved) do return {}, .Out_Of_Memory
header_offset := uint( uintptr(self.reserve_start) - uintptr(self.base_address) )
commit_left := self.committed - commit_used - header_offset
needs_more_committed := commit_left < size_to_allocate
if needs_more_committed {
profile("VArena Growing")
next_commit_size := max(to_be_used, commit_size)
alloc_error = virtual_commit( self.vmem, next_commit_size )
if alloc_error != .None do return
}
data_ptr := ([^]byte)(current_offset + uintptr(alignment_offset))
data = slice( data_ptr, requested_size )
self.commit_used += int(size_to_allocate)
alloc_error = .None
// log_backing: [Kilobyte * 16]byte; backing_slice := log_backing[:]
// log( str_pfmt_buffer( backing_slice, "varena alloc - BASE: %p PTR: %X, SIZE: %d", cast(rawptr) self.base_address, & data[0], requested_size) )
if zero_memory {
// log( str_pfmt_buffer( backing_slice, "Zeroring data (Range: %p to %p)", raw_data(data), cast(rawptr) (uintptr(raw_data(data)) + uintptr(requested_size))))
// zero( data )
mem_zero( data_ptr, int(requested_size) )
}
return
}
varena_grow :: #force_inline proc(self: ^VArena, old_memory: []byte, requested_size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, should_zero := true, loc := #caller_location
) -> (data: []byte, error: AllocatorError)
{
if ensure(old_memory == nil, "Growing without old_memory?") {
data, error = varena_alloc(self, requested_size, alignment, should_zero, loc)
return
}
if ensure(requested_size == len(old_memory), "Requested grow when none needed") {
data = old_memory
return
}
alignment_offset := uintptr(cursor(old_memory)) & uintptr(alignment - 1)
if ensure(alignment_offset == 0 && requested_size < len(old_memory), "Requested a shrink from varena_grow") {
data = old_memory
return
}
old_memory_offset := cursor(old_memory)[len(old_memory):]
current_offset := self.reserve_start[self.commit_used:]
when false {
if old_size < page_size {
// We're dealing with an allocation that requested less than the minimum allocated on vmem.
// Provide them more of their actual memory
data = slice(transmute([^]byte)old_memory, size )
return
}
}
verify( old_memory_offset == current_offset,
"Cannot grow existing allocation in vitual arena to a larger size unless it was the last allocated" )
if old_memory_offset != current_offset
{
// Give it new memory and copy the old over. Old memory is unrecoverable until clear.
new_region : []byte
new_region, error = varena_alloc( self, requested_size, alignment, should_zero, loc )
if ensure(new_region == nil || error != .None, "Failed to grab new region") {
data = old_memory
return
}
copy_non_overlapping( cursor(new_region), cursor(old_memory), len(old_memory) )
data = new_region
// log_print_fmt("varena resize (new): old: %p %v new: %p %v", old_memory, old_size, (& data[0]), size)
return
}
new_region : []byte
new_region, error = varena_alloc( self, requested_size - len(old_memory), alignment, should_zero, loc)
if ensure(new_region == nil || error != .None, "Failed to grab new region") {
data = old_memory
return
}
data = slice(cursor(old_memory), requested_size )
// log_print_fmt("varena resize (expanded): old: %p %v new: %p %v", old_memory, old_size, (& data[0]), size)
return
}
varena_shrink :: proc(self: ^VArena, memory: []byte, requested_size: int, loc := #caller_location) -> (data: []byte, error: AllocatorError) {
if requested_size == len(memory) { return memory, .None }
if ensure(memory == nil, "Shrinking without old_memory?") do return memory, .Invalid_Argument
current_offset := self.reserve_start[self.commit_used:]
shrink_amount := len(memory) - requested_size
if shrink_amount < 0 { return memory, .None }
assert(cursor(memory) == current_offset)
self.commit_used -= shrink_amount
return memory[:requested_size], .None
}
varena_reset :: #force_inline proc(self: ^VArena) {
// TODO(Ed): Prevent multiple threads from entering here concurrently?
// sync.mutex_guard( & mutex )
self.commit_used = 0
}
varena_release :: #force_inline proc(self: ^VArena) {
// TODO(Ed): Prevent multiple threads from entering here concurrently?
// sync.mutex_guard( & mutex )
virtual_release( self.vmem )
self.commit_used = 0
}
varena_rewind :: #force_inline proc(arena: ^VArena, save_point: AllocatorSP, loc := #caller_location) {
assert_contextless(save_point.type_sig == varena_allocator_proc)
assert_contextless(save_point.slot >= 0 && save_point.slot <= int(arena.commit_used))
arena.commit_used = save_point.slot
}
varena_save :: #force_inline proc(arena: ^VArena) -> AllocatorSP { return AllocatorSP { type_sig = varena_allocator_proc, slot = cast(int) arena.commit_used }}
varena_allocator_proc :: proc(input: AllocatorProc_In, output: ^AllocatorProc_Out) {
assert(output != nil)
assert(input.data != nil)
arena := transmute(^VArena) input.data
switch input.op {
case .Alloc, .Alloc_NoZero:
output.allocation, output.error = varena_alloc(arena, input.requested_size, input.alignment, input.op == .Alloc, input.loc)
return
case .Free:
output.error = .Mode_Not_Implemented
case .Reset:
varena_reset(arena)
case .Grow, .Grow_NoZero:
output.allocation, output.error = varena_grow(arena, input.old_allocation, input.requested_size, input.alignment, input.op == .Grow, input.loc)
case .Shrink:
output.allocation, output.error = varena_shrink(arena, input.old_allocation, input.requested_size)
case .Rewind:
varena_rewind(arena, input.save_point)
case .SavePoint:
output.save_point = varena_save(arena)
case .Query:
output.features = {.Alloc, .Reset, .Grow, .Shrink, .Rewind}
output.max_alloc = int(arena.reserved) - arena.commit_used
output.min_alloc = 0
output.left = output.max_alloc
output.save_point = varena_save(arena)
}
}
varena_odin_allocator_proc :: proc(
allocator_data : rawptr,
mode : Odin_AllocatorMode,
size : int,
alignment : int,
old_memory : rawptr,
old_size : int,
location : SourceCodeLocation = #caller_location
) -> (data: []byte, alloc_error: AllocatorError)
{
arena := transmute( ^VArena) allocator_data
page_size := uint(virtual_get_page_size())
switch mode {
case .Alloc, .Alloc_Non_Zeroed:
data, alloc_error = varena_alloc( arena, size, alignment, (mode == .Alloc), location )
return
case .Free:
alloc_error = .Mode_Not_Implemented
case .Free_All:
varena_reset( arena )
case .Resize, .Resize_Non_Zeroed:
if size > old_size do data, alloc_error = varena_grow (arena, slice(cursor(old_memory), old_size), size, alignment, (mode == .Resize), location)
else do data, alloc_error = varena_shrink(arena, slice(cursor(old_memory), old_size), size, location)
case .Query_Features:
set := cast( ^Odin_AllocatorModeSet) old_memory
if set != nil do (set ^) = {.Alloc, .Alloc_Non_Zeroed, .Free_All, .Resize, .Query_Features}
case .Query_Info:
info := (^Odin_AllocatorQueryInfo)(old_memory)
info.pointer = transmute(rawptr) varena_save(arena).slot
info.size = cast(int) arena.reserved
info.alignment = MEMORY_ALIGNMENT_DEFAULT
return to_bytes(info), nil
}
return
}
varena_odin_allocator :: proc(arena: ^VArena) -> (allocator: Odin_Allocator) {
allocator.procedure = varena_odin_allocator_proc
allocator.data = arena
return
}
when ODIN_DEBUG {
varena_ainfo :: #force_inline proc "contextless" (arena: ^VArena) -> AllocatorInfo { return AllocatorInfo{proc_id = .VArena, data = arena} }
varena_allocator :: #force_inline proc "contextless" (arena: ^VArena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{proc_id = .VArena, data = arena} }
}
else {
varena_ainfo :: #force_inline proc "contextless" (arena: ^VArena) -> AllocatorInfo { return AllocatorInfo{procedure = varena_allocator_proc, data = arena} }
varena_allocator :: #force_inline proc "contextless" (arena: ^VArena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{procedure = varena_allocator_proc, data = arena} }
}
varena_push_item :: #force_inline proc(va: ^VArena, $Type: typeid, alignment: int = MEMORY_ALIGNMENT_DEFAULT, should_zero := true, location := #caller_location
) -> (^Type, AllocatorError) {
raw, error := varena_alloc(va, size_of(Type), alignment, should_zero, location)
return transmute(^Type) cursor(raw), error
}
varena_push_slice :: #force_inline proc(va: ^VArena, $Type: typeid, amount: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, should_zero := true, location := #caller_location
) -> ([]Type, AllocatorError) {
raw, error := varena_alloc(va, size_of(Type) * amount, alignment, should_zero, location)
return slice(transmute([^]Type) cursor(raw), len(raw) / size_of(Type)), error
}
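Editorial example (not part of the diff): a minimal sketch of the grow path above. It assumes a VArena produced elsewhere by varena_make, uses only procedures visible in this file, and example_varena_grow is a hypothetical name.
example_varena_grow :: proc(va: ^VArena) {
	// First allocation lands at the arena's current commit offset.
	first, err := varena_alloc(va, 64, MEMORY_ALIGNMENT_DEFAULT)
	assert(err == .None)
	// Since `first` is the top-most allocation, varena_grow only commits the delta
	// and hands back a longer slice over the same base pointer.
	grown, grow_err := varena_grow(va, first, 256, MEMORY_ALIGNMENT_DEFAULT, true)
	assert(grow_err == .None)
	// Growing anything that is not the most recent allocation trips the verify above.
	// The arena also plugs into Odin's allocator interface when needed:
	//   context.allocator = varena_odin_allocator(va)
	_ = grown
}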

View File

@@ -0,0 +1,126 @@
package grime
/*
Arena (Chained Virtual Arenas):
*/
ArenaFlags :: bit_set[ArenaFlag; u32]
ArenaFlag :: enum u32 {
No_Large_Pages,
No_Chaining,
}
Arena :: struct {
backing: ^VArena,
prev: ^Arena,
current: ^Arena,
base_pos: int,
pos: int,
flags: ArenaFlags,
}
arena_make :: proc(reserve_size : int = Mega * 64, commit_size : int = Mega * 64, base_addr: uintptr = 0, flags: ArenaFlags = {}) -> ^Arena {
header_size := align_pow2(size_of(Arena), MEMORY_ALIGNMENT_DEFAULT)
current, error := varena_make(reserve_size, commit_size, base_addr, transmute(VArenaFlags) flags)
assert(error == .None)
assert(current != nil)
arena: ^Arena; arena, error = varena_push_item(current, Arena, 1)
assert(error == .None)
assert(arena != nil)
arena^ = Arena {
backing = current,
prev = nil,
current = arena,
base_pos = 0,
pos = header_size,
flags = flags,
}
return arena
}
arena_alloc :: proc(arena: ^Arena, size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT) -> []byte {
assert(arena != nil)
active := arena.current
size_requested := size
size_aligned := align_pow2(size_requested, alignment)
pos_pre := active.pos
pos_pst := pos_pre + size_aligned
reserved := int(active.backing.reserved)
should_chain := (.No_Chaining not_in arena.flags) && (reserved < pos_pst)
if should_chain {
new_arena := arena_make(reserved, active.backing.commit_size, 0, transmute(ArenaFlags) active.backing.flags)
new_arena.base_pos = active.base_pos + reserved
sll_stack_push_n(& arena.current, & new_arena, & new_arena.prev)
new_arena.prev = active
active = arena.current
// Allocation continues in the fresh arena: restart position tracking from its header.
pos_pre = active.pos
pos_pst = pos_pre + size_aligned
}
result_ptr := transmute([^]byte) (uintptr(active) + uintptr(pos_pre))
vresult, error := varena_alloc(active.backing, size_aligned, alignment)
assert(error == .None)
slice_assert(vresult)
assert(raw_data(vresult) == result_ptr)
active.pos = pos_pst
return slice(result_ptr, size)
}
arena_release :: proc(arena: ^Arena) {
assert(arena != nil)
curr := arena.current
for curr != nil {
prev := curr.prev
varena_release(curr.backing)
curr = prev
}
}
arena_reset :: proc(arena: ^Arena) {
arena_rewind(arena, AllocatorSP { type_sig = arena_allocator_proc, slot = 0 })
}
arena_rewind :: proc(arena: ^Arena, save_point: AllocatorSP) {
assert(arena != nil)
assert(save_point.type_sig == arena_allocator_proc)
header_size := align_pow2(size_of(Arena), MEMORY_ALIGNMENT_DEFAULT)
curr := arena.current
big_pos := max(header_size, save_point.slot)
// Release arenas that are beyond the save point
for curr.base_pos >= big_pos {
prev := curr.prev
varena_release(curr.backing)
curr = prev
}
arena.current = curr
new_pos := big_pos - curr.base_pos
assert(new_pos <= curr.pos)
curr.pos = new_pos
varena_rewind(curr.backing, { type_sig = varena_allocator_proc, slot = curr.pos + size_of(VArena) })
}
arena_save :: #force_inline proc(arena: ^Arena) -> AllocatorSP { return { type_sig = arena_allocator_proc, slot = arena.base_pos + arena.current.pos } }
arena_allocator_proc :: proc(input: AllocatorProc_In, output: ^AllocatorProc_Out) {
panic("not implemented")
}
arena_odin_allocator_proc :: proc(
allocator_data : rawptr,
mode : Odin_AllocatorMode,
size : int,
alignment : int,
old_memory : rawptr,
old_size : int,
location : SourceCodeLocation = #caller_location
) -> (data: []byte, alloc_error: AllocatorError)
{
panic("not implemented")
}
when ODIN_DEBUG {
arena_ainfo :: #force_inline proc "contextless" (arena: ^Arena) -> AllocatorInfo { return AllocatorInfo{proc_id = .Arena, data = arena} }
arena_allocator :: #force_inline proc "contextless" (arena: ^Arena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{proc_id = .Arena, data = arena} }
}
else {
arena_ainfo :: #force_inline proc "contextless" (arena: ^Arena) -> AllocatorInfo { return AllocatorInfo{procedure = arena_allocator_proc, data = arena} }
arena_allocator :: #force_inline proc "contextless" (arena: ^Arena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{procedure = arena_allocator_proc, data = arena} }
}
arena_push_item :: proc()
{
}
arena_push_array :: proc()
{
}
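Editorial example (not part of the diff): a usage sketch of the chained arena API defined above. Sizes are illustrative and example_arena_usage is a hypothetical name.
example_arena_usage :: proc() {
	arena := arena_make(Mega * 8, Mega * 8)
	defer arena_release(arena)
	block_a := arena_alloc(arena, Mega * 4)
	mark    := arena_save(arena)            // records base_pos + pos of the current arena
	block_b := arena_alloc(arena, Mega * 6) // exceeds the 8 MiB reserve, so a new arena is chained
	arena_rewind(arena, mark)               // releases the chained arena and restores the saved position
	_ = block_a; _ = block_b
}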

View File

@@ -0,0 +1,28 @@
package grime
// TODO(Ed): Review this
import "base:runtime"
// TODO(Ed): Support address sanitizer
/*
Pool allocator backed by chained virtual arenas.
*/
Pool_FreeBlock :: struct { next: ^Pool_FreeBlock }
VPool :: struct {
arenas: ^Arena,
block_size: uint,
// alignment: uint,
free_list_head: ^Pool_FreeBlock,
}
pool_make :: proc() -> (pool: VPool, error: AllocatorError)
{
panic("not implemented")
// return
}
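Editorial sketch (not part of the diff): pool_make is still a stub, so the following shows one way allocation and free could work with the fields declared above, assuming block_size is at least size_of(Pool_FreeBlock). Both _sketch procedures are hypothetical names.
pool_alloc_sketch :: proc(pool: ^VPool) -> []byte {
	if pool.free_list_head != nil {
		// Reuse a previously freed block by popping the free list.
		block := pool.free_list_head
		pool.free_list_head = block.next
		return slice(transmute([^]byte) block, int(pool.block_size))
	}
	// Otherwise carve a fresh block out of the backing chained arena.
	return arena_alloc(pool.arenas, int(pool.block_size))
}
pool_free_sketch :: proc(pool: ^VPool, block: []byte) {
	// Thread the block back onto the free list; its bytes are reused as the link node.
	node := transmute(^Pool_FreeBlock) raw_data(block)
	node.next = pool.free_list_head
	pool.free_list_head = node
}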

View File

@@ -0,0 +1,15 @@
package grime
VSlabSizeClass :: struct {
vmem_reserve: uint,
block_size: uint,
block_alignment: uint,
}
Slab_Max_Size_Classes :: 24
SlabPolicy :: FStack(VSlabSizeClass, Slab_Max_Size_Classes)
VSlab :: struct {
pools: FStack(VPool, Slab_Max_Size_Classes),
}

View File

@@ -23,14 +23,14 @@ load_client_api :: proc(version_id: int) -> (loaded_module: Client_API) {
file_copy_sync( Path_Sectr_Module, Path_Sectr_Live_Module, allocator = context.temp_allocator )
did_load: bool; lib, did_load = os_lib_load( Path_Sectr_Live_Module )
if ! did_load do panic( "Failed to load the sectr module.")
startup = cast( type_of( host_memory.client_api.startup)) os_lib_get_proc(lib, "startup")
shutdown = cast( type_of( host_memory.client_api.shutdown)) os_lib_get_proc(lib, "sectr_shutdown")
tick_lane_startup = cast( type_of( host_memory.client_api.tick_lane_startup)) os_lib_get_proc(lib, "tick_lane_startup")
job_worker_startup = cast( type_of( host_memory.client_api.job_worker_startup)) os_lib_get_proc(lib, "job_worker_startup")
hot_reload = cast( type_of( host_memory.client_api.hot_reload)) os_lib_get_proc(lib, "hot_reload")
tick_lane = cast( type_of( host_memory.client_api.tick_lane)) os_lib_get_proc(lib, "tick_lane")
clean_frame = cast( type_of( host_memory.client_api.clean_frame)) os_lib_get_proc(lib, "clean_frame")
jobsys_worker_tick = cast( type_of( host_memory.client_api.jobsys_worker_tick)) os_lib_get_proc(lib, "jobsys_worker_tick")
startup = transmute( type_of( host_memory.client_api.startup)) os_lib_get_proc(lib, "startup")
shutdown = transmute( type_of( host_memory.client_api.shutdown)) os_lib_get_proc(lib, "sectr_shutdown")
tick_lane_startup = transmute( type_of( host_memory.client_api.tick_lane_startup)) os_lib_get_proc(lib, "tick_lane_startup")
job_worker_startup = transmute( type_of( host_memory.client_api.job_worker_startup)) os_lib_get_proc(lib, "job_worker_startup")
hot_reload = transmute( type_of( host_memory.client_api.hot_reload)) os_lib_get_proc(lib, "hot_reload")
tick_lane = transmute( type_of( host_memory.client_api.tick_lane)) os_lib_get_proc(lib, "tick_lane")
clean_frame = transmute( type_of( host_memory.client_api.clean_frame)) os_lib_get_proc(lib, "clean_frame")
jobsys_worker_tick = transmute( type_of( host_memory.client_api.jobsys_worker_tick)) os_lib_get_proc(lib, "jobsys_worker_tick")
if startup == nil do panic("Failed to load sectr.startup symbol" )
if shutdown == nil do panic("Failed to load sectr.shutdown symbol" )
if tick_lane_startup == nil do panic("Failed to load sectr.tick_lane_startup symbol" )
@@ -151,6 +151,8 @@ main :: proc()
if thread_memory.id == .Master_Prepper {
thread_join_multiple(.. host_memory.threads[1:THREAD_TICK_LANES + THREAD_JOB_WORKERS])
}
host_memory.client_api.shutdown();
unload_client_api( & host_memory.client_api )
@@ -271,7 +273,6 @@ host_job_worker_entrypoint :: proc(worker_thread: ^SysThread)
leader := barrier_wait(& host_memory.lane_job_sync)
}
@export
sync_client_api :: proc()
{
profile(#procedure)

View File

@@ -83,6 +83,10 @@ import grime "codebase:grime"
grime_set_profiler_module_context :: grime.set_profiler_module_context
grime_set_profiler_thread_buffer :: grime.set_profiler_thread_buffer
ensure :: grime.ensure
fatal :: grime.fatal
verify :: grime.verify
file_is_locked :: grime.file_is_locked
logger_init :: grime.logger_init
to_odin_logger :: grime.to_odin_logger
@@ -137,24 +141,24 @@ import "codebase:sectr"
ThreadMemory :: sectr.ThreadMemory
WorkerID :: sectr.WorkerID
ensure :: #force_inline proc( condition : b32, msg : string, location := #caller_location ) {
if condition do return
log_print( msg, LoggerLevel.Warning, location )
debug_trap()
}
// TODO(Ed) : Setup exit codes!
fatal :: #force_inline proc( msg : string, exit_code : int = -1, location := #caller_location ) {
log_print( msg, LoggerLevel.Fatal, location )
debug_trap()
process_exit( exit_code )
}
// TODO(Ed) : Setup exit codes!
verify :: #force_inline proc( condition : b32, msg : string, exit_code : int = -1, location := #caller_location ) {
if condition do return
log_print( msg, LoggerLevel.Fatal, location )
debug_trap()
process_exit( exit_code )
}
// ensure :: #force_inline proc( condition : b32, msg : string, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Warning, location )
// debug_trap()
// }
// // TODO(Ed) : Setup exit codes!
// fatal :: #force_inline proc( msg : string, exit_code : int = -1, location := #caller_location ) {
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
// // TODO(Ed) : Setup exit codes!
// verify :: #force_inline proc( condition : b32, msg : string, exit_code : int = -1, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
log_print :: proc( msg : string, level := LoggerLevel.Info, loc := #caller_location ) {
context.allocator = arena_allocator(& host_memory.host_scratch)

View File

@@ -35,12 +35,11 @@ then prepare for multi-threaded "laned" tick: thread_wide_startup.
@export
startup :: proc(host_mem: ^ProcessMemory, thread_mem: ^ThreadMemory)
{
// Rad Debugger driving me crazy..
// NOTE(Ed): This is not necessary, they're just loops for my sanity.
for ; memory == nil; { memory = host_mem }
for ; thread == nil; { thread = thread_mem }
grime_set_profiler_module_context(& memory.spall_context)
grime_set_profiler_thread_buffer(& thread.spall_buffer)
// (Ignore RAD Debugger's values being null)
memory = host_mem
thread = thread_mem
// grime_set_profiler_module_context(& memory.spall_context)
// grime_set_profiler_thread_buffer(& thread.spall_buffer)
profile(#procedure)
startup_tick := tick_now()
@@ -101,7 +100,8 @@ startup :: proc(host_mem: ^ProcessMemory, thread_mem: ^ThreadMemory)
log_print_fmt("Startup time: %v ms", startup_ms)
}
// For some reason odin's symbols conflict with native foreign symbols...
// NOTE(Ed): For some reason odin's symbols conflict with native foreign symbols...
// Called in host.main after all tick lane or job worker threads have joined.
@export
sectr_shutdown :: proc()
{
@@ -126,14 +126,14 @@ hot_reload :: proc(host_mem: ^ProcessMemory, thread_mem: ^ThreadMemory)
thread = thread_mem
if thread.id == .Master_Prepper {
sync_store(& memory, host_mem, .Release)
grime_set_profiler_module_context(& memory.spall_context)
// grime_set_profiler_module_context(& memory.spall_context)
}
else {
// NOTE(Ed): This is probably not necessary, they're just loops for my sanity.
for ; memory == nil; { sync_load(& memory, .Acquire) }
for ; thread == nil; { thread = thread_mem }
}
grime_set_profiler_thread_buffer(& thread.spall_buffer)
// grime_set_profiler_thread_buffer(& thread.spall_buffer)
}
profile(#procedure)
// Do hot-reload stuff...
@@ -177,7 +177,7 @@ tick_lane_startup :: proc(thread_mem: ^ThreadMemory)
{
if thread_mem.id != .Master_Prepper {
thread = thread_mem
grime_set_profiler_thread_buffer(& thread.spall_buffer)
// grime_set_profiler_thread_buffer(& thread.spall_buffer)
}
profile(#procedure)
}
@@ -187,7 +187,7 @@ job_worker_startup :: proc(thread_mem: ^ThreadMemory)
{
if thread_mem.id != .Master_Prepper {
thread = thread_mem
grime_set_profiler_thread_buffer(& thread.spall_buffer)
// grime_set_profiler_thread_buffer(& thread.spall_buffer)
}
profile(#procedure)
}

View File

@@ -2,10 +2,13 @@ package sectr
import sokol_app "thirdparty:sokol/app"
//region Sokol App
sokol_app_init_callback :: proc "c" () {
context = memory.client_memory.sokol_context
log_print("sokol_app: Confirmed initialization")
}
// This is being filled in but we're directly controlling the lifetime of sokol_app's execution.
// So this will only get called during window pan or resize events (on Win32 at least)
sokol_app_frame_callback :: proc "c" ()
@@ -37,3 +40,220 @@ sokol_app_frame_callback :: proc "c" ()
tick_lane_frametime( & client_tick, sokol_delta_ms, sokol_delta_ns, can_sleep = false )
window.resized = false
}
sokol_app_cleanup_callback :: proc "c" () {
context = memory.client_memory.sokol_context
log_print("sokol_app: Confirmed cleanup")
}
sokol_app_alloc :: proc "c" ( size : uint, user_data : rawptr ) -> rawptr {
context = memory.client_memory.sokol_context
// block, error := mem_alloc( int(size), allocator = persistent_slab_allocator() )
// ensure(error == AllocatorError.None, "sokol_app allocation failed")
// return block
// TODO(Ed): Implement
return nil
}
sokol_app_free :: proc "c" ( data : rawptr, user_data : rawptr ) {
context = memory.client_memory.sokol_context
// mem_free(data, allocator = persistent_slab_allocator() )
// TODO(Ed): Implement
}
sokol_app_log_callback :: proc "c" (
tag: cstring,
log_level: u32,
log_item_id: u32,
message_or_null: cstring,
line_nr: u32,
filename_or_null: cstring,
user_data: rawptr)
{
context = memory.client_memory.sokol_context
odin_level: LoggerLevel
switch log_level {
case 0: odin_level = .Fatal
case 1: odin_level = .Error
case 2: odin_level = .Warning
case 3: odin_level = .Info
}
clone_backing: [16 * Kilo]byte
cloned_msg: string = "";
if message_or_null != nil {
cloned_msg = cstr_to_str_capped(message_or_null, clone_backing[:])
}
cloned_fname: string = ""
if filename_or_null != nil {
cloned_fname = cstr_to_str_capped(filename_or_null, clone_backing[len(cloned_msg):])
}
cloned_tag := cstr_to_str_capped(tag, clone_backing[len(cloned_msg) + len(cloned_fname):])
log_print_fmt( "%-80s %s::%v", cloned_msg, cloned_tag, line_nr, level = odin_level )
}
// TODO(Ed): Does this need to be queued to a separate thread?
sokol_app_event_callback :: proc "c" (sokol_event: ^sokol_app.Event)
{
context = memory.client_memory.sokol_context
event: InputEvent
using event
_sokol_frame_id = sokol_event.frame_count
frame_id = get_frametime().current_frame
mouse.pos = { sokol_event.mouse_x, sokol_event.mouse_y }
mouse.delta = { sokol_event.mouse_dx, sokol_event.mouse_dy }
switch sokol_event.type
{
case .INVALID:
log_print_fmt("sokol_app - event: INVALID?")
log_print_fmt("%v", sokol_event)
case .KEY_DOWN:
if sokol_event.key_repeat do return
type = .Key_Pressed
key = to_key_from_sokol( sokol_event.key_code )
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
// logf("Key pressed(sokol): %v", key)
// logf("frame (sokol): %v", frame_id )
case .KEY_UP:
if sokol_event.key_repeat do return
type = .Key_Released
key = to_key_from_sokol( sokol_event.key_code )
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
// logf("Key released(sokol): %v", key)
// logf("frame (sokol): %v", frame_id )
case .CHAR:
if sokol_event.key_repeat do return
type = .Unicode
codepoint = transmute(rune) sokol_event.char_code
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_DOWN:
type = .Mouse_Pressed
mouse.btn = to_mouse_btn_from_sokol( sokol_event.mouse_button )
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_UP:
type = .Mouse_Released
mouse.btn = to_mouse_btn_from_sokol( sokol_event.mouse_button )
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_SCROLL:
type = .Mouse_Scroll
mouse.scroll = { sokol_event.scroll_x, sokol_event.scroll_y }
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_MOVE:
type = .Mouse_Move
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_ENTER:
type = .Mouse_Enter
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_LEAVE:
type = .Mouse_Leave
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
// TODO(Ed): Add support
case .TOUCHES_BEGAN:
case .TOUCHES_MOVED:
case .TOUCHES_ENDED:
case .TOUCHES_CANCELLED:
case .RESIZED: sokol_app.consume_event()
case .ICONIFIED: sokol_app.consume_event()
case .RESTORED: sokol_app.consume_event()
case .FOCUSED: sokol_app.consume_event()
case .UNFOCUSED: sokol_app.consume_event()
case .SUSPENDED: sokol_app.consume_event()
case .RESUMED: sokol_app.consume_event()
case .QUIT_REQUESTED: sokol_app.consume_event()
case .CLIPBOARD_PASTED: sokol_app.consume_event()
case .FILES_DROPPED: sokol_app.consume_event()
case .DISPLAY_CHANGED:
log_print_fmt("sokol_app - event: Display changed")
log_print_fmt("refresh rate: %v", sokol_app.refresh_rate())
monitor_refresh_hz := sokol_app.refresh_rate()
sokol_app.consume_event()
}
}
//endregion Sokol App
//region Sokol GFX
sokol_gfx_alloc :: proc "c" ( size : uint, user_data : rawptr ) -> rawptr {
context = memory.client_memory.sokol_context
// block, error := mem_alloc( int(size), allocator = persistent_slab_allocator() )
// ensure(error == AllocatorError.None, "sokol_gfx allocation failed")
// return block
// TODO(Ed): Implement
return nil
}
sokol_gfx_free :: proc "c" ( data : rawptr, user_data : rawptr ) {
context = memory.client_memory.sokol_context
// TODO(Ed): Implement
// free(data, allocator = persistent_slab_allocator() )
}
sokol_gfx_log_callback :: proc "c" (
tag: cstring,
log_level: u32,
log_item_id: u32,
message_or_null: cstring,
line_nr: u32,
filename_or_null: cstring,
user_data: rawptr)
{
context = memory.client_memory.sokol_context
odin_level : LoggerLevel
switch log_level {
case 0: odin_level = .Fatal
case 1: odin_level = .Error
case 2: odin_level = .Warning
case 3: odin_level = .Info
}
clone_backing: [16 * Kilo]byte
cloned_msg : string = ""
if message_or_null != nil {
cloned_msg = cstr_to_str_capped(message_or_null, clone_backing[:])
}
cloned_fname : string = ""
if filename_or_null != nil {
cloned_fname = cstr_to_str_capped(filename_or_null, clone_backing[len(cloned_msg):])
}
cloned_tag := cstr_to_str_capped(tag, clone_backing[len(cloned_msg) + len(cloned_fname):])
log_print_fmt( "%-80s %s::%v", cloned_msg, cloned_tag, line_nr, level = odin_level )
}
//endregion Sokol GFX

View File

@@ -0,0 +1,90 @@
package sectr
InputBindSig :: distinct u128
InputBind :: struct {
keys: [4]KeyCode,
mouse_btns: [4]MouseBtn,
scroll: [2]AnalogAxis,
modifiers: ModifierCodeFlags,
label: string,
}
InputBindStatus :: struct {
detected: b32,
consumed: b32,
frame_id: u64,
}
InputActionProc :: #type proc(user_ptr: rawptr)
InputAction :: struct {
id: int,
user_ptr: rawptr,
cb: InputActionProc,
always: b32,
}
InputContext :: struct {
binds: []InputBind,
status: []InputBindStatus,
onpush_action: []InputAction,
onpop_action: []InputAction,
signature: []InputBindSig,
}
inputbind_signature :: proc(binding: InputBind) -> InputBindSig {
// TODO(Ed): Figure out best hasher for this...
return cast(InputBindSig) 0
}
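Editorial sketch (not part of the diff) for the hasher TODO above: two FNV-1a 64-bit passes over the non-label fields, packed into the u128 signature. The second basis constant is arbitrary, the layout assumption is that keys, mouse_btns, scroll, and modifiers sit contiguously, and inputbind_signature_sketch is a hypothetical name.
inputbind_signature_sketch :: proc(binding: InputBind) -> InputBindSig {
	fnv1a_64 :: proc(bytes: []byte, basis: u64) -> u64 {
		hash := basis
		for b in bytes { hash = (hash ~ u64(b)) * 0x100000001b3 }
		return hash
	}
	binding := binding
	// Hash only the key/button/scroll/modifier fields; `label` is presentation data.
	hashed_size :: size_of([4]KeyCode) + size_of([4]MouseBtn) + size_of([2]AnalogAxis) + size_of(ModifierCodeFlags)
	bytes := (transmute([^]byte) & binding.keys)[:hashed_size]
	lo := fnv1a_64(bytes, 0xcbf29ce484222325)
	hi := fnv1a_64(bytes, 0x84222325cbf29ce4)
	return InputBindSig(u128(hi) << 64 | u128(lo))
}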
// Note(Ed): Bindings should be remade for a context when a user modifies any in configuration.
inputcontext_init :: proc(ctx: ^InputContext, binds: []InputBind, onpush: []InputAction = {}, onpop: []InputAction = {}) {
ctx.binds = binds
ctx.onpush_action = onpush
ctx.onpop_action = onpop
for bind, id in ctx.binds {
ctx.signature[id] = inputbind_signature(bind)
}
}
inputcontext_make :: #force_inline proc(binds: []InputBind, onpush: []InputAction = {}, onpop: []InputAction = {}) -> InputContext {
ctx: InputContext; inputcontext_init(& ctx, binds, onpush, onpop); return ctx
}
// Should be called by the user explicitly during frame cleanup.
inputcontext_clear_status :: #force_inline proc "contextless" (ctx: ^InputContext) {
zero(ctx.status)
}
inputbinding_status :: #force_inline proc(id: int) -> InputBindStatus {
return get_input_binds().status[id]
}
inputcontext_inherit :: proc(dst: ^InputContext, src: ^InputContext) {
for dst_sig, dst_id in dst.signature
{
for src_sig, src_id in src.signature
{
if dst_sig != src_sig {
continue
}
dst.status[dst_id] = src.status[src_id]
}
}
}
inputcontext_push :: proc(ctx: ^InputContext, dont_inherit_status: b32 = false) {
// push context stack
// clear binding status for context
// optionally inherit status
// detect status
// Dispatch push actions meeting conditions
}
inputcontext_pop :: proc(ctx: ^InputContext, dont_inherit_status: b32 = false) {
// Dispatch pop actions meeting conditions
// parent inherit consumed statuses
// pop context stack
}
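Editorial sketch (not part of the diff): one way the outlined push steps could map onto the procedures and state already declared, with bind detection left as a TODO; inputcontext_push_sketch is a hypothetical name.
inputcontext_push_sketch :: proc(ctx: ^InputContext, dont_inherit_status: b32 = false) {
	// Push onto the context stack kept in State (state.odin).
	append(& memory.client_memory.input_binds_stack, ctx^)
	// Clear binding status for this context.
	inputcontext_clear_status(ctx)
	// Optionally inherit statuses from the previous top of the stack.
	stack := get_input_binds_stack()
	if ! dont_inherit_status && len(stack) > 1 {
		inputcontext_inherit(ctx, & stack[len(stack) - 2])
	}
	// TODO: detect bind statuses against this frame's input events.
	// Dispatch push actions; only the unconditional (`always`) ones here, since conditions aren't defined yet.
	for action in ctx.onpush_action {
		if action.always do action.cb(action.user_ptr)
	}
}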

View File

@@ -0,0 +1,286 @@
package sectr
InputEventType :: enum u32 {
Key_Pressed,
Key_Released,
Mouse_Pressed,
Mouse_Released,
Mouse_Scroll,
Mouse_Move,
Mouse_Enter,
Mouse_Leave,
Unicode,
}
InputEvent :: struct
{
frame_id : u64,
type : InputEventType,
key : KeyCode,
modifiers : ModifierCodeFlags,
mouse : struct {
btn : MouseBtn,
pos : V2_F4,
delta : V2_F4,
scroll : V2_F4,
},
codepoint : rune,
// num_touches : u32,
// touches : Touchpoint,
_sokol_frame_id : u64,
}
// TODO(Ed): May just use input event exclusively in the future and have pointers for key and mouse event filters
// I'm on the fence about this as I don't want to force
InputKeyEvent :: struct {
frame_id : u64,
type : InputEventType,
key : KeyCode,
modifiers : ModifierCodeFlags,
}
InputMouseEvent :: struct {
frame_id : u64,
type : InputEventType,
btn : MouseBtn,
pos : V2_F4,
delta : V2_F4,
scroll : V2_F4,
modifiers : ModifierCodeFlags,
}
// Let's see if we need more than this...
InputEvents :: struct {
events : FRingBuffer(InputEvent, 64),
key_events : FRingBuffer(InputKeyEvent, 32),
mouse_events : FRingBuffer(InputMouseEvent, 32),
codes_pressed : Array(rune),
}
// Note(Ed): There is a staged_input_events : Array(InputEvent), in the state.odin's State struct
append_staged_input_events :: #force_inline proc(event: InputEvent) {
append( & memory.client_memory.staged_input_events, event )
}
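Editorial sketch (not part of the diff): the intended per-frame flow through the staging array and ring buffers, assuming access to the State struct from state.odin; the swap of the input_data double buffer is omitted and input_pipeline_sketch is a hypothetical name.
input_pipeline_sketch :: proc(state: ^State) {
	// 1. sokol_app_event_callback appends into staged_input_events throughout the frame.
	// 2. Drain the staging array into the fixed ring buffers.
	pull_staged_input_events(state.input, & state.input_events, state.staged_input_events)
	// 3. Fold the buffered events into the per-frame device state.
	poll_input_events(state.input, state.input_prev, state.input_events)
}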
pull_staged_input_events :: proc( input: ^InputState, using input_events: ^InputEvents, using staged_events : Array(InputEvent) )
{
staged_events_slice := array_to_slice(staged_events)
push( & input_events.events, staged_events_slice )
// using input_events
for event in staged_events_slice
{
switch event.type {
case .Key_Pressed:
push( & key_events, InputKeyEvent {
frame_id = event.frame_id,
type = event.type,
key = event.key,
modifiers = event.modifiers
})
// logf("Key pressed(event pushed): %v", event.key)
// logf("last key event frame: %v", peek_back(& key_events).frame_id)
// logf("last event frame: %v", peek_back(& events).frame_id)
case .Key_Released:
push( & key_events, InputKeyEvent {
frame_id = event.frame_id,
type = event.type,
key = event.key,
modifiers = event.modifiers
})
// logf("Key released(event rpushed): %v", event.key)
// logf("last key event frame: %v", peek_back(& key_events).frame_id)
// logf("last event frame: %v", peek_back(& events).frame_id)
case .Unicode:
append( & codes_pressed, event.codepoint )
case .Mouse_Pressed:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
case .Mouse_Released:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
case .Mouse_Scroll:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
// logf("Detected scroll: %v", event.mouse.scroll)
case .Mouse_Move:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
case .Mouse_Enter:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
case .Mouse_Leave:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
}
}
clear( staged_events )
}
poll_input_events :: proc( input, prev_input : ^InputState, input_events : InputEvents )
{
input.keyboard = {}
input.mouse = {}
// logf("m's value is: %v (prev)", prev_input.keyboard.keys[KeyCode.M] )
for prev_key, id in prev_input.keyboard.keys {
input.keyboard.keys[id].ended_down = prev_key.ended_down
}
for prev_btn, id in prev_input.mouse.btns {
input.mouse.btns[id].ended_down = prev_btn.ended_down
}
input.mouse.raw_pos = prev_input.mouse.raw_pos
input.mouse.pos = prev_input.mouse.pos
input_events := input_events
using input_events
@static prev_frame : u64 = 0
last_frame : u64 = 0
if events.num > 0 {
last_frame = peek_back( events).frame_id
}
// No new events, don't update
if last_frame == prev_frame do return
Iterate_Key_Events:
{
iter_obj := iterator( & key_events ); iter := & iter_obj
for event := next( iter ); event != nil; event = next( iter )
{
// logf("last_frame (iter): %v", last_frame)
// logf("frame (iter): %v", event.frame_id )
if last_frame > event.frame_id {
break
}
key := & input.keyboard.keys[event.key]
prev_key := prev_input.keyboard.keys[event.key]
// logf("key event: %v", event)
first_transition := key.half_transitions == 0
#partial switch event.type {
case .Key_Pressed:
key.half_transitions += 1
key.ended_down = true
case .Key_Released:
key.half_transitions += 1
key.ended_down = false
}
}
}
Iterate_Mouse_Events:
{
iter_obj := iterator( & mouse_events ); iter := & iter_obj
for event := next( iter ); event != nil; event = next( iter )
{
if last_frame > event.frame_id {
break
}
process_digital_btn :: proc( btn : ^DigitalBtn, prev_btn : DigitalBtn, ended_down : b32 )
{
first_transition := btn.half_transitions == 0
btn.half_transitions += 1
btn.ended_down = ended_down
}
// log_print_fmt("mouse event: %v", event)
#partial switch event.type {
case .Mouse_Pressed:
btn := & input.mouse.btns[event.btn]
prev_btn := prev_input.mouse.btns[event.btn]
process_digital_btn( btn, prev_btn, true )
case .Mouse_Released:
btn := & input.mouse.btns[event.btn]
prev_btn := prev_input.mouse.btns[event.btn]
process_digital_btn( btn, prev_btn, false )
case .Mouse_Scroll:
input.mouse.scroll += event.scroll
case .Mouse_Move:
case .Mouse_Enter:
case .Mouse_Leave:
// Handled below
}
input.mouse.raw_pos = event.pos
input.mouse.pos = render_to_screen_pos( event.pos, memory.client_memory.app_window.extent )
input.mouse.delta = event.delta * { 1, -1 }
}
}
prev_frame = last_frame
}
input_event_iter :: #force_inline proc () -> FRingBufferIterator(InputEvent) {
return iterator_ringbuf_fixed( & memory.client_memory.input_events.events )
}
input_key_event_iter :: #force_inline proc() -> FRingBufferIterator(InputKeyEvent) {
return iterator_ringbuf_fixed( & memory.client_memory.input_events.key_events )
}
input_mouse_event_iter :: #force_inline proc() -> FRingBufferIterator(InputMouseEvent) {
return iterator_ringbuf_fixed( & memory.client_memory.input_events.mouse_events )
}
input_codes_pressed_slice :: #force_inline proc() -> []rune {
return to_slice( memory.client_memory.input_events.codes_pressed )
}

View File

@@ -0,0 +1,186 @@
// TODO(Ed) : This, if it gets larger, can be moved to its own package
package sectr
import "base:runtime"
AnalogAxis :: f32
AnalogStick :: struct {
X, Y : f32
}
DigitalBtn :: struct {
half_transitions : i32,
ended_down : b32,
}
btn_pressed :: #force_inline proc "contextless" (btn: DigitalBtn) -> b32 { return btn.ended_down && btn.half_transitions > 0 }
btn_released :: #force_inline proc "contextless" (btn: DigitalBtn) -> b32 { return btn.ended_down == false && btn.half_transitions > 0 }
MaxMouseBtns :: 16
MouseBtn :: enum u32 {
Left = 0x0,
Middle = 0x1,
Right = 0x2,
Side = 0x3,
Forward = 0x4,
Back = 0x5,
Extra = 0x6,
Invalid = 0x100,
count
}
KeyboardState :: struct #raw_union {
keys : [KeyCode.count] DigitalBtn,
using individual : struct {
null : DigitalBtn, // 0x00
ignored : DigitalBtn, // 0x01
// GFLW / Sokol
menu,
world_1, world_2 : DigitalBtn,
// 0x02 - 0x04
__0x05_0x07_Unassigned__ : [ 3 * size_of( DigitalBtn)] u8,
tab, backspace : DigitalBtn,
// 0x08 - 0x09
right, left, up, down : DigitalBtn,
// 0x0A - 0x0D
enter : DigitalBtn, // 0x0E
__0x0F_Unassigned__ : [ 1 * size_of( DigitalBtn)] u8,
caps_lock,
scroll_lock,
num_lock : DigitalBtn,
// 0x10 - 0x12
left_alt,
left_shift,
left_control,
right_alt,
right_shift,
right_control : DigitalBtn,
// 0x13 - 0x18
print_screen,
pause,
escape,
home,
end,
page_up,
page_down,
space : DigitalBtn,
// 0x19 - 0x20
exclamation,
quote_dbl,
hash,
dollar,
percent,
ampersand,
quote,
paren_open,
paren_close,
asterisk,
plus,
comma,
minus,
period,
slash : DigitalBtn,
// 0x21 - 0x2F
nrow_0, // 0x30
nrow_1, // 0x31
nrow_2, // 0x32
nrow_3, // 0x33
nrow_4, // 0x34
nrow_5, // 0x35
nrow_6, // 0x36
nrow_7, // 0x37
nrow_8, // 0x38
nrow_9, // 0x39
__0x3A_Unassigned__ : [ 1 * size_of(DigitalBtn)] u8,
semicolon,
less,
equals,
greater,
question,
at : DigitalBtn,
A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z : DigitalBtn,
bracket_open,
backslash,
bracket_close,
underscore,
backtick : DigitalBtn,
kpad_0,
kpad_1,
kpad_2,
kpad_3,
kpad_4,
kpad_5,
kpad_6,
kpad_7,
kpad_8,
kpad_9,
kpad_decimal,
kpad_equals,
kpad_plus,
kpad_minus,
kpad_multiply,
kpad_divide,
kpad_enter : DigitalBtn,
F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, F11, F12 : DigitalBtn,
insert, delete : DigitalBtn,
F13, F14, F15, F16, F17, F18, F19, F20, F21, F22, F23, F24, F25 : DigitalBtn,
}
}
ModifierCode :: enum u32 {
Shift,
Control,
Alt,
Left_Mouse,
Right_Mouse,
Middle_Mouse,
Left_Shift,
Right_Shift,
Left_Control,
Right_Control,
Left_Alt,
Right_Alt,
}
ModifierCodeFlags :: bit_set[ModifierCode; u32]
MouseState :: struct {
using _ : struct #raw_union {
btns : [16] DigitalBtn,
using individual : struct {
left, middle, right : DigitalBtn,
side, forward, back, extra : DigitalBtn,
}
},
raw_pos, pos, delta : V2_F4,
scroll : [2]AnalogAxis,
}
mouse_world_delta :: #force_inline proc "contextless" (mouse_delta: V2_F4, cam: ^Camera) -> V2_F4 {
return mouse_delta * ( 1 / cam.zoom )
}
InputState :: struct {
keyboard : KeyboardState,
mouse : MouseState,
}

View File

@@ -0,0 +1,84 @@
package sectr
import "base:runtime"
import "core:os"
import "core:c/libc"
import sokol_app "thirdparty:sokol/app"
to_modifiers_code_from_sokol :: proc( sokol_modifiers : u32 ) -> ( modifiers : ModifierCodeFlags )
{
if sokol_modifiers & sokol_app.MODIFIER_SHIFT != 0 do modifiers |= { .Shift }
if sokol_modifiers & sokol_app.MODIFIER_CTRL != 0 do modifiers |= { .Control }
if sokol_modifiers & sokol_app.MODIFIER_ALT != 0 do modifiers |= { .Alt }
if sokol_modifiers & sokol_app.MODIFIER_LMB != 0 do modifiers |= { .Left_Mouse }
if sokol_modifiers & sokol_app.MODIFIER_RMB != 0 do modifiers |= { .Right_Mouse }
if sokol_modifiers & sokol_app.MODIFIER_MMB != 0 do modifiers |= { .Middle_Mouse }
if sokol_modifiers & sokol_app.MODIFIER_LSHIFT != 0 do modifiers |= { .Left_Shift }
if sokol_modifiers & sokol_app.MODIFIER_RSHIFT != 0 do modifiers |= { .Right_Shift }
if sokol_modifiers & sokol_app.MODIFIER_LCTRL != 0 do modifiers |= { .Left_Control }
if sokol_modifiers & sokol_app.MODIFIER_RCTRL != 0 do modifiers |= { .Right_Control }
if sokol_modifiers & sokol_app.MODIFIER_LALT != 0 do modifiers |= { .Left_Alt }
if sokol_modifiers & sokol_app.MODIFIER_RALT != 0 do modifiers |= { .Right_Alt }
return
}
to_key_from_sokol :: proc( sokol_key : sokol_app.Keycode ) -> ( key : KeyCode )
{
world_code_offset :: i32(sokol_app.Keycode.WORLD_1) - i32(KeyCode.world_1)
arrow_code_offset :: i32(sokol_app.Keycode.RIGHT) - i32(KeyCode.right)
func_row_code_offset :: i32(sokol_app.Keycode.F1) - i32(KeyCode.F1)
func_extra_code_offset :: i32(sokol_app.Keycode.F13) - i32(KeyCode.F25)
keypad_num_offset :: i32(sokol_app.Keycode.KP_0) - i32(KeyCode.kpad_0)
switch sokol_key {
case .INVALID ..= .GRAVE_ACCENT : key = transmute(KeyCode) sokol_key
case .WORLD_1, .WORLD_2 : key = transmute(KeyCode) (i32(sokol_key) - world_code_offset)
case .ESCAPE : key = .escape
case .ENTER : key = .enter
case .TAB : key = .tab
case .BACKSPACE : key = .backspace
case .INSERT : key = .insert
case .DELETE : key = .delete
case .RIGHT ..= .UP : key = transmute(KeyCode) (i32(sokol_key) - arrow_code_offset)
case .PAGE_UP : key = .page_up
case .PAGE_DOWN : key = .page_down
case .HOME : key = .home
case .END : key = .end
case .CAPS_LOCK : key = .caps_lock
case .SCROLL_LOCK : key = .scroll_lock
case .NUM_LOCK : key = .num_lock
case .PRINT_SCREEN : key = .print_screen
case .PAUSE : key = .pause
case .F1 ..= .F12 : key = transmute(KeyCode) (i32(sokol_key) - func_row_code_offset)
case .F13 ..= .F25 : key = transmute(KeyCode) (i32(sokol_key) - func_extra_code_offset)
case .KP_0 ..= .KP_9 : key = transmute(KeyCode) (i32(sokol_key) - keypad_num_offset)
case .KP_DECIMAL : key = .kpad_decimal
case .KP_DIVIDE : key = .kpad_divide
case .KP_MULTIPLY : key = .kpad_multiply
case .KP_SUBTRACT : key = .kpad_minus
case .KP_ADD : key = .kpad_plus
case .KP_ENTER : key = .kpad_enter
case .KP_EQUAL : key = .kpad_equals
case .LEFT_SHIFT : key = .left_shift
case .LEFT_CONTROL : key = .left_control
case .LEFT_ALT : key = .left_alt
case .LEFT_SUPER : key = .ignored
case .RIGHT_SHIFT : key = .right_shift
case .RIGHT_CONTROL : key = .right_control
case .RIGHT_ALT : key = .right_alt
case .RIGHT_SUPER : key = .ignored
case .MENU : key = .menu
}
return
}
to_mouse_btn_from_sokol :: proc( sokol_mouse : sokol_app.Mousebutton ) -> ( btn : MouseBtn )
{
switch sokol_mouse {
case .LEFT : btn = .Left
case .MIDDLE : btn = .Middle
case .RIGHT : btn = .Right
case .INVALID : btn = .Invalid
}
return
}

View File

@@ -0,0 +1,239 @@
package sectr
// Based off of SDL2's Scancode; which is based off of:
// https://usb.org/sites/default/files/hut1_12.pdf
// I gutted values I would never use
QeurtyCode :: enum u32 {
unknown = 0,
A = 4,
B = 5,
C = 6,
D = 7,
E = 8,
F = 9,
G = 10,
H = 11,
I = 12,
J = 13,
K = 14,
L = 15,
M = 16,
N = 17,
O = 18,
P = 19,
Q = 20,
R = 21,
S = 22,
T = 23,
U = 24,
V = 25,
W = 26,
X = 27,
Y = 28,
Z = 29,
nrow_1 = 30,
nrow_2 = 31,
nrow_3 = 32,
nrow_4 = 33,
nrow_5 = 34,
nrow_6 = 35,
nrow_7 = 36,
nrow_8 = 37,
nrow_9 = 38,
nrow_0 = 39,
enter = 40,
escape = 41,
backspace = 42,
tab = 43,
space = 44,
minus = 45,
equals = 46,
bracket_open = 47,
bracket_close = 48,
backslash = 49,
NONUSHASH = 50,
semicolon = 51,
apostrophe = 52,
grave = 53,
comma = 54,
period = 55,
slash = 56,
capslock = 57,
F1 = 58,
F2 = 59,
F3 = 60,
F4 = 61,
F5 = 62,
F6 = 63,
F7 = 64,
F8 = 65,
F9 = 66,
F10 = 67,
F11 = 68,
F12 = 69,
// print_screen = 70,
// scroll_lock = 71,
pause = 72,
insert = 73,
home = 74,
page_up = 75,
delete = 76,
end = 77,
page_down = 78,
right = 79,
left = 80,
down = 81,
up = 82,
numlock_clear = 83,
kpad_divide = 84,
kpad_multiply = 85,
kpad_minus = 86,
kpad_plus = 87,
kpad_enter = 88,
kpad_1 = 89,
kpad_2 = 90,
kpad_3 = 91,
kpad_4 = 92,
kpad_5 = 93,
kpad_6 = 94,
kpad_7 = 95,
kpad_8 = 96,
kpad_9 = 97,
kpad_0 = 98,
kpad_period = 99,
// NONUSBACKSLASH = 100,
// OS_Compose = 101,
// power = 102,
kpad_equals = 103,
// F13 = 104,
// F14 = 105,
// F15 = 106,
// F16 = 107,
// F17 = 108,
// F18 = 109,
// F19 = 110,
// F20 = 111,
// F21 = 112,
// F22 = 113,
// F23 = 114,
// F24 = 115,
// execute = 116,
// help = 117,
// menu = 118,
// select = 119,
// stop = 120,
// again = 121,
// undo = 122,
// cut = 123,
// copy = 124,
// paste = 125,
// find = 126,
// mute = 127,
// volume_up = 128,
// volume_down = 129,
/* LOCKINGCAPSLOCK = 130, */
/* LOCKINGNUMLOCK = 131, */
/* LOCKINGSCROLLLOCK = 132, */
// kpad_comma = 133,
// kpad_equals_AS400 = 134,
// international_1 = 135,
// international_2 = 136,
// international_3 = 137,
// international_4 = 138,
// international_5 = 139,
// international_6 = 140,
// international_7 = 141,
// international_8 = 142,
// international_9 = 143,
// lang_1 = 144,
// lang_2 = 145,
// lang_3 = 146,
// lang_4 = 147,
// lang_5 = 148,
// lang_6 = 149,
// lang_7 = 150,
// lang_8 = 151,
// lang_9 = 152,
// alt_erase = 153,
// sysreq = 154,
// cancel = 155,
// clear = 156,
// prior = 157,
// return_2 = 158,
// separator = 159,
// out = 160,
// OPER = 161,
// clear_again = 162,
// CRSEL = 163,
// EXSEL = 164,
// KP_00 = 176,
// KP_000 = 177,
// THOUSANDSSEPARATOR = 178,
// DECIMALSEPARATOR = 179,
// CURRENCYUNIT = 180,
// CURRENCYSUBUNIT = 181,
// KP_LEFTPAREN = 182,
// KP_RIGHTPAREN = 183,
// KP_LEFTBRACE = 184,
// KP_RIGHTBRACE = 185,
// KP_TAB = 186,
// KP_BACKSPACE = 187,
// KP_A = 188,
// KP_B = 189,
// KP_C = 190,
// KP_D = 191,
// KP_E = 192,
// KP_F = 193,
// KP_XOR = 194,
// KP_POWER = 195,
// KP_PERCENT = 196,
// KP_LESS = 197,
// KP_GREATER = 198,
// KP_AMPERSAND = 199,
// KP_DBLAMPERSAND = 200,
// KP_VERTICALBAR = 201,
// KP_DBLVERTICALBAR = 202,
// KP_COLON = 203,
// KP_HASH = 204,
// KP_SPACE = 205,
// KP_AT = 206,
// KP_EXCLAM = 207,
// KP_MEMSTORE = 208,
// KP_MEMRECALL = 209,
// KP_MEMCLEAR = 210,
// KP_MEMADD = 211,
// KP_MEMSUBTRACT = 212,
// KP_MEMMULTIPLY = 213,
// KP_MEMDIVIDE = 214,
// KP_PLUSMINUS = 215,
// KP_CLEAR = 216,
// KP_CLEARENTRY = 217,
// KP_BINARY = 218,
// KP_OCTAL = 219,
// KP_DECIMAL = 220,
// KP_HEXADECIMAL = 221,
left_control = 224,
left_shift = 225,
left_alt = 226,
// LGUI = 227,
right_control = 228,
right_shift = 229,
right_alt = 230,
count = 512,
}

View File

@@ -0,0 +1,168 @@
package sectr
MaxKeyboardKeys :: 512
KeyCode :: enum u32 {
null = 0x00,
ignored = 0x01,
menu = 0x02,
world_1 = 0x03,
world_2 = 0x04,
// 0x05
// 0x06
// 0x07
backspace = '\b', // 0x08
tab = '\t', // 0x09
right = 0x0A,
left = 0x0B,
down = 0x0C,
up = 0x0D,
enter = '\r', // 0x0E
// 0x0F
caps_lock = 0x10,
scroll_lock = 0x11,
num_lock = 0x12,
left_alt = 0x13,
left_shift = 0x14,
left_control = 0x15,
right_alt = 0x16,
right_shift = 0x17,
right_control = 0x18,
print_screen = 0x19,
pause = 0x1A,
escape = '\x1B', // 0x1B
home = 0x1C,
end = 0x1D,
page_up = 0x1E,
page_down = 0x1F,
space = ' ', // 0x20
exclamation = '!', // 0x21
quote_dbl = '"', // 0x22
hash = '#', // 0x23
dollar = '$', // 0x24
percent = '%', // 0x25
ampersand = '&', // 0x26
quote = '\'', // 0x27
paren_open = '(', // 0x28
paren_close = ')', // 0x29
asterisk = '*', // 0x2A
plus = '+', // 0x2B
comma = ',', // 0x2C
minus = '-', // 0x2D
period = '.', // 0x2E
slash = '/', // 0x2F
nrow_0 = '0', // 0x30
nrow_1 = '1', // 0x31
nrow_2 = '2', // 0x32
nrow_3 = '3', // 0x33
nrow_4 = '4', // 0x34
nrow_5 = '5', // 0x35
nrow_6 = '6', // 0x36
nrow_7 = '7', // 0x37
nrow_8 = '8', // 0x38
nrow_9 = '9', // 0x39
// 0x3A
semicolon = ';', // 0x3B
less = '<', // 0x3C
equals = '=', // 0x3D
greater = '>', // 0x3E
question = '?', // 0x3F
at = '@', // 0x40
A = 'A', // 0x41
B = 'B', // 0x42
C = 'C', // 0x43
D = 'D', // 0x44
E = 'E', // 0x45
F = 'F', // 0x46
G = 'G', // 0x47
H = 'H', // 0x48
I = 'I', // 0x49
J = 'J', // 0x4A
K = 'K', // 0x4B
L = 'L', // 0x4C
M = 'M', // 0x4D
N = 'N', // 0x4E
O = 'O', // 0x4F
P = 'P', // 0x50
Q = 'Q', // 0x51
R = 'R', // 0x52
S = 'S', // 0x53
T = 'T', // 0x54
U = 'U', // 0x55
V = 'V', // 0x56
W = 'W', // 0x57
X = 'X', // 0x58
Y = 'Y', // 0x59
Z = 'Z', // 0x5A
bracket_open = '[', // 0x5B
backslash = '\\', // 0x5C
bracket_close = ']', // 0x5D
caret = '^', // 0x5E
underscore = '_', // 0x5F
backtick = '`', // 0x60
kpad_0 = 0x61,
kpad_1 = 0x62,
kpad_2 = 0x63,
kpad_3 = 0x64,
kpad_4 = 0x65,
kpad_5 = 0x66,
kpad_6 = 0x67,
kpad_7 = 0x68,
kpad_8 = 0x69,
kpad_9 = 0x6A,
kpad_decimal = 0x6B,
kpad_equals = 0x6C,
kpad_plus = 0x6D,
kpad_minus = 0x6E,
kpad_multiply = 0x6F,
kpad_divide = 0x70,
kpad_enter = 0x71,
F1 = 0x72,
F2 = 0x73,
F3 = 0x74,
F4 = 0x75,
F5 = 0x76,
F6 = 0x77,
F7 = 0x78,
F8 = 0x79,
F9 = 0x7A,
F10 = 0x7B,
F11 = 0x7C,
F12 = 0x7D,
insert = 0x7E,
delete = 0x7F,
F13 = 0x80,
F14 = 0x81,
F15 = 0x82,
F16 = 0x83,
F17 = 0x84,
F18 = 0x85,
F19 = 0x86,
F20 = 0x87,
F21 = 0x88,
F22 = 0x89,
F23 = 0x8A,
F24 = 0x8B,
F25 = 0x8C,
count = 0x8D,
}

View File

@@ -28,8 +28,7 @@ f32_Min :: 0x00800000
// Note(Ed) : I don't see an intrinsic available anywhere for this. So I'll be using the Terathon non-SSE impl
// Inverse Square Root
// C++ Source https://github.com/EricLengyel/Terathon-Math-Library/blob/main/TSMath.cpp#L191
inverse_sqrt_f32 :: proc "contextless" ( value: f32 ) -> f32
{
inverse_sqrt_f32 :: proc "contextless" ( value: f32 ) -> f32 {
if ( value < f32_Min) { return f32_Infinity }
value_u32 := transmute(u32) value

View File

@@ -19,6 +19,7 @@ import "core:log"
LoggerLevel :: log.Level
import "core:mem"
AllocatorError :: mem.Allocator_Error
// Used strictly for the logger
Odin_Arena :: mem.Arena
odin_arena_allocator :: mem.arena_allocator
@@ -60,14 +61,42 @@ import "core:time"
tick_now :: time.tick_now
import "codebase:grime"
Logger :: grime.Logger
logger_init :: grime.logger_init
to_odin_logger :: grime.to_odin_logger
ensure :: grime.ensure
fatal :: grime.fatal
verify :: grime.verify
Array :: grime.Array
array_to_slice :: grime.array_to_slice
array_append_array :: grime.array_append_array
array_append_slice :: grime.array_append_slice
array_append_value :: grime.array_append_value
array_back :: grime.array_back
array_clear :: grime.array_clear
// Logging
Logger :: grime.Logger
logger_init :: grime.logger_init
// Memory
mem_alloc :: grime.mem_alloc
mem_copy :: grime.mem_copy
mem_copy_non_overlapping :: grime.mem_copy_non_overlapping
mem_zero :: grime.mem_zero
slice_zero :: grime.slice_zero
// Ring Buffer
FRingBuffer :: grime.FRingBuffer
FRingBufferIterator :: grime.FRingBufferIterator
ringbuf_fixed_peak_back :: grime.ringbuf_fixed_peak_back
ringbuf_fixed_push :: grime.ringbuf_fixed_push
ringbuf_fixed_push_slice :: grime.ringbuf_fixed_push_slice
iterator_ringbuf_fixed :: grime.iterator_ringbuf_fixed
next_ringbuf_fixed_iterator :: grime.next_ringbuf_fixed_iterator
// Strings
cstr_to_str_capped :: grime.cstr_to_str_capped
to_odin_logger :: grime.to_odin_logger
// Operating System
set__scheduler_granularity :: grime.set__scheduler_granularity
grime_set_profiler_module_context :: grime.set_profiler_module_context
grime_set_profiler_thread_buffer :: grime.set_profiler_thread_buffer
// grime_set_profiler_module_context :: grime.set_profiler_module_context
// grime_set_profiler_thread_buffer :: grime.set_profiler_thread_buffer
Kilo :: 1024
Mega :: Kilo * 1024
@@ -92,24 +121,24 @@ Tera :: Giga * 1024
S_To_MS :: grime.S_To_MS
ensure :: #force_inline proc( condition : b32, msg : string, location := #caller_location ) {
if condition do return
log_print( msg, LoggerLevel.Warning, location )
debug_trap()
}
// TODO(Ed) : Setup exit codes!
fatal :: #force_inline proc( msg : string, exit_code : int = -1, location := #caller_location ) {
log_print( msg, LoggerLevel.Fatal, location )
debug_trap()
process_exit( exit_code )
}
// TODO(Ed) : Setup exit codes!
verify :: #force_inline proc( condition : b32, msg : string, exit_code : int = -1, location := #caller_location ) {
if condition do return
log_print( msg, LoggerLevel.Fatal, location )
debug_trap()
process_exit( exit_code )
}
// ensure :: #force_inline proc( condition : b32, msg : string, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Warning, location )
// debug_trap()
// }
// // TODO(Ed) : Setup exit codes!
// fatal :: #force_inline proc( msg : string, exit_code : int = -1, location := #caller_location ) {
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
// // TODO(Ed) : Setup exit codes!
// verify :: #force_inline proc( condition : b32, msg : string, exit_code : int = -1, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
log_print :: proc( msg : string, level := LoggerLevel.Info, loc := #caller_location ) {
context.allocator = odin_arena_allocator(& memory.host_scratch)
@@ -141,13 +170,24 @@ add :: proc {
add_r2f4,
add_biv3f4,
}
append :: proc {
array_append_array,
array_append_slice,
array_append_value,
}
array_append :: proc {
array_append_array,
array_append_slice,
array_append_value,
}
biv3f4 :: proc {
biv3f4_via_f32s,
v3f4_to_biv3f4,
}
bivec :: biv3f4
clear :: proc {
array_clear,
}
cross :: proc {
cross_s,
cross_v2,
@@ -156,11 +196,9 @@ cross :: proc {
cross_v3f4_uv3f4,
cross_u3f4_v3f4,
}
div :: proc {
div_biv3f4_f32,
}
dot :: proc {
sdot,
vdot,
@@ -171,75 +209,76 @@ dot :: proc {
dot_v3f4_uv3f4,
dot_uv3f4_v3f4,
}
equal :: proc {
equal_r2f4,
}
is_power_of_two :: proc {
is_power_of_two_u32,
// is_power_of_two_uintptr,
}
iterator :: proc {
iterator_ringbuf_fixed,
}
mov_avg_exp :: proc {
mov_avg_exp_f32,
mov_avg_exp_f64,
}
mul :: proc {
mul_biv3f4,
mul_biv3f4_f32,
mul_f32_biv3f4,
}
join :: proc {
join_r2f4,
}
inverse_sqrt :: proc {
inverse_sqrt_f32,
}
next :: proc {
next_ringbuf_fixed_iterator,
}
point3 :: proc {
v3f4_to_point3f4,
}
pow2 :: proc {
pow2_v3f4,
}
peek_back :: proc {
ringbuf_fixed_peak_back,
}
push :: proc {
ringbuf_fixed_push,
ringbuf_fixed_push_slice,
}
quatf4 :: proc {
quatf4_from_rotor3f4,
}
regress :: proc {
regress_biv3f4,
}
rotor3 :: proc {
rotor3f4_via_comps_f4,
rotor3f4_via_bv_s_f4,
// rotor3f4_via_from_to_v3f4,
}
size :: proc {
size_r2f4,
}
sub :: proc {
sub_r2f4,
sub_biv3f4,
// join_point3_f4,
// join_pointflat3_f4,
}
to_slice :: proc {
array_to_slice,
}
v2f4 :: proc {
v2f4_from_f32s,
v2f4_from_scalar,
v2f4_from_v2s4,
v2s4_from_v2f4,
}
v3f4 :: proc {
v3f4_via_f32s,
biv3f4_to_v3f4,
@@ -247,14 +286,12 @@ v3f4 :: proc {
pointflat3f4_to_v3f4,
uv3f4_to_v3f4,
}
v2 :: proc {
v2f4_from_f32s,
v2f4_from_scalar,
v2f4_from_v2s4,
v2s4_from_v2f4,
}
v3 :: proc {
v3f4_via_f32s,
biv3f4_to_v3f4,
@@ -262,12 +299,14 @@ v3 :: proc {
pointflat3f4_to_v3f4,
uv3f4_to_v3f4,
}
v4 :: proc {
uv4f4_to_v4f4,
}
wedge :: proc {
wedge_v3f4,
wedge_biv3f4,
}
zero :: proc {
mem_zero,
slice_zero,
}

View File

@@ -24,12 +24,35 @@ when ODIN_OS == .Windows {
// 1 inch = 2.54 cm, 96 inch * 2.54 = 243.84 DPCM
}
//region Unit Conversion Impl
// cm_to_points :: proc( cm : f32 ) -> f32 {
// }
// points_to_cm :: proc( points : f32 ) -> f32 {
// screen_dpc := get_state().app_window.dpc
// cm_per_pixel := 1.0 / screen_dpc
// pixels := points * DPT_DPC * cm_per_pixel
// return points *
// }
f32_cm_to_pixels :: #force_inline proc "contextless"(cm, screen_ppcm: f32) -> f32 { return cm * screen_ppcm }
f32_pixels_to_cm :: #force_inline proc "contextless"(pixels, screen_ppcm: f32) -> f32 { return pixels * (1.0 / screen_ppcm) }
f32_points_to_pixels :: #force_inline proc "contextless"(points, screen_ppcm: f32) -> f32 { return points * DPT_PPCM * (1.0 / screen_ppcm) }
f32_pixels_to_points :: #force_inline proc "contextless"(pixels, screen_ppcm: f32) -> f32 { return pixels * (1.0 / screen_ppcm) * Points_Per_CM }
v2f4_cm_to_pixels :: #force_inline proc "contextless"(v: V2_F4, screen_ppcm: f32) -> V2_F4 { return v * screen_ppcm }
v2f4_pixels_to_cm :: #force_inline proc "contextless"(v: V2_F4, screen_ppcm: f32) -> V2_F4 { return v * (1.0 / screen_ppcm) }
v2f4_points_to_pixels :: #force_inline proc "contextless"(vpoints: V2_F4, screen_ppcm: f32) -> V2_F4 { return vpoints * DPT_PPCM * (1.0 / screen_ppcm) }
r2f4_cm_to_pixels :: #force_inline proc "contextless"(range: R2_F4, screen_ppcm: f32) -> R2_F4 { return R2_F4 { range.p0 * screen_ppcm, range.p1 * screen_ppcm } }
range2_pixels_to_cm :: #force_inline proc "contextless"(range: R2_F4, screen_ppcm: f32) -> R2_F4 { cm_per_pixel := 1.0 / screen_ppcm; return R2_F4 { range.p0 * cm_per_pixel, range.p1 * cm_per_pixel } }
// vec2_points_to_cm :: proc( vpoints : Vec2 ) -> Vec2 {
// }
//endregion Unit Conversion Impl
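Editorial sanity check (not part of the diff) for the conversions above, assuming a 96 DPI display, i.e. roughly 37.8 pixels per centimeter; units_sanity_check is a hypothetical name.
units_sanity_check :: proc() {
	screen_ppcm: f32 = 96.0 / 2.54           // ~37.8 pixels per cm at 96 DPI
	px := f32_cm_to_pixels(1.0, screen_ppcm) // one centimeter is ~37.8 pixels
	cm := f32_pixels_to_cm(px, screen_ppcm)  // round-trips back to ~1.0 cm
	assert(abs(cm - 1.0) < 0.0001)
}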
AreaSize :: V2_F4
Bounds2 :: struct {
top_left, bottom_right: V2_F4,
}
BoundsCorners2 :: struct {
top_left, top_right, bottom_left, bottom_right: V2_F4,
}
@@ -57,3 +80,66 @@ CameraZoomMode :: enum u32 {
Extents2_F4 :: V2_F4
Extents2_S4 :: V2_S4
bounds2_radius :: #force_inline proc "contextless" (bounds: Bounds2) -> f32 { return max( bounds.bottom_right.x, bounds.top_left.y ) }
extent_from_size :: #force_inline proc "contextless" (size: AreaSize) -> Extents2_F4 { return transmute(Extents2_F4) (size * 0.5) }
screen_size :: #force_inline proc "contextless" (screen_extent: Extents2_F4) -> AreaSize { return transmute(AreaSize) (screen_extent * 2.0) }
screen_get_bounds :: #force_inline proc "contextless" (screen_extent: Extents2_F4) -> R2_F4 { return R2_F4 { { -screen_extent.x, -screen_extent.y} /*bottom_left*/, { screen_extent.x, screen_extent.y} /*top_right*/ } }
screen_get_corners :: #force_inline proc "contextless"(screen_extent: Extents2_F4) -> BoundsCorners2 { return {
top_left = { -screen_extent.x, screen_extent.y },
top_right = { screen_extent.x, screen_extent.y },
bottom_left = { -screen_extent.x, -screen_extent.y },
bottom_right = { screen_extent.x, -screen_extent.y },
}}
view_get_bounds :: #force_inline proc "contextless"(cam: Camera, screen_extent: Extents2_F4) -> R2_F4 {
cam_zoom_ratio := 1.0 / cam.zoom
bottom_left := V2_F4 { -screen_extent.x, -screen_extent.y}
top_right := V2_F4 { screen_extent.x, screen_extent.y}
bottom_left = screen_to_ws_view_pos(bottom_left, cam.position, cam.zoom)
top_right = screen_to_ws_view_pos(top_right, cam.position, cam.zoom)
return R2_F4{bottom_left, top_right}
}
view_get_corners :: #force_inline proc "contextless"(cam: Camera, screen_extent: Extents2_F4) -> BoundsCorners2 {
cam_zoom_ratio := 1.0 / cam.zoom
zoomed_extent := screen_extent * cam_zoom_ratio
top_left := cam.position + V2_F4 { -zoomed_extent.x, zoomed_extent.y }
top_right := cam.position + V2_F4 { zoomed_extent.x, zoomed_extent.y }
bottom_left := cam.position + V2_F4 { -zoomed_extent.x, -zoomed_extent.y }
bottom_right := cam.position + V2_F4 { zoomed_extent.x, -zoomed_extent.y }
return { top_left, top_right, bottom_left, bottom_right }
}
render_to_screen_pos :: #force_inline proc "contextless" (pos: V2_F4, screen_extent: Extents2_F4) -> V2_F4 { return V2_F4 { pos.x - screen_extent.x, (pos.y * -1) + screen_extent.y } }
render_to_ws_view_pos :: #force_inline proc "contextless" (pos: V2_F4) -> V2_F4 { return {} } //TODO(Ed): Implement?
screen_to_ws_view_pos :: #force_inline proc "contextless" (pos: V2_F4, cam_pos: V2_F4, cam_zoom: f32, ) -> V2_F4 { return pos * (/*Camera Zoom Ratio*/1.0 / cam_zoom) - cam_pos } // TODO(Ed): Doesn't take into account view extent.
screen_to_render_pos :: #force_inline proc "contextless" (pos: V2_F4, screen_extent: Extents2_F4) -> V2_F4 { return pos + screen_extent } // Centered screen space to conventional screen space used for rendering
// TODO(Ed): These should assume a cam_context or have the ability to provide it in params
ws_view_extent :: #force_inline proc "contextless" (cam_view: Extents2_F4, cam_zoom: f32) -> Extents2_F4 { return cam_view * (/*Camera Zoom Ratio*/1.0 / cam_zoom) }
ws_view_to_screen_pos :: #force_inline proc "contextless" (ws_pos : V2_F4, cam: Camera) -> V2_F4 {
// Apply camera transformation
view_pos := (ws_pos - cam.position) * cam.zoom
// TODO(Ed): properly take into account cam.view
screen_pos := view_pos
return screen_pos
}
ws_view_to_render_pos :: #force_inline proc "contextless"(position: V2_F4, cam: Camera, screen_extent: Extents2_F4) -> V2_F4 {
extent_offset: V2_F4 = { screen_extent.x, screen_extent.y } * { 1, 1 }
position := V2_F4 { position.x, position.y }
cam_offset := V2_F4 { cam.position.x, cam.position.y }
return extent_offset + (position + cam_offset) * cam.zoom
}
// Workspace view to screen space position (zoom agnostic)
// TODO(Ed): Support a position which would not be centered on the screen if in a viewport
ws_view_to_screen_pos_no_zoom :: #force_inline proc "contextless"(position: V2_F4, cam: Camera) -> V2_F4 {
cam_zoom_ratio := 1.0 / cam.zoom
return { position.x, position.y } * cam_zoom_ratio
}
// Workspace view to render space position (zoom agnostic)
// TODO(Ed): Support a position which would not be centered on the screen if in a viewport
ws_view_to_render_pos_no_zoom :: #force_inline proc "contextless"(position: V2_F4, cam: Camera) -> V2_F4 {
cam_zoom_ratio := 1.0 / cam.zoom
return { position.x, position.y } * cam_zoom_ratio
}

View File

@@ -2,8 +2,8 @@ package sectr
//region STATIC MEMORY
// This should be the only global on client module side.
memory: ^ProcessMemory
@(thread_local) thread: ^ThreadMemory
@(private) memory: ^ProcessMemory
@(private, thread_local) thread: ^ThreadMemory
//endregion STATIC MEMORY
MemoryConfig :: struct {
@@ -70,16 +70,28 @@ FrameTime :: struct {
}
State :: struct {
sokol_frame_count: i64,
sokol_context: Context,
config: AppConfig,
app_window: AppWindow,
logger: Logger,
// Overall frametime of the tick frame (currently main thread's)
using frametime : FrameTime,
logger: Logger,
sokol_frame_count: i64,
sokol_context: Context,
input_data : [2]InputState,
input_prev : ^InputState,
input : ^InputState, // TODO(Ed): Rename to indicate its the device's signal state for the frame?
input_events: InputEvents,
input_binds_stack: Array(InputContext),
// Note(Ed): Do not modify directly, use its interface in app/event.odin
staged_input_events : Array(InputEvent),
// TODO(Ed): Add a multi-threaded guard for accessing or mutating staged_input_events.
}
ThreadState :: struct {
@@ -96,3 +108,7 @@ ThreadState :: struct {
app_config :: #force_inline proc "contextless" () -> AppConfig { return memory.client_memory.config }
get_frametime :: #force_inline proc "contextless" () -> FrameTime { return memory.client_memory.frametime }
// get_state :: #force_inline proc "contextless" () -> ^State { return memory.client_memory }
get_input_binds :: #force_inline proc "contextless" () -> InputContext { return array_back (memory.client_memory.input_binds_stack) }
get_input_binds_stack :: #force_inline proc "contextless" () -> []InputContext { return array_to_slice(memory.client_memory.input_binds_stack) }

View File

@@ -97,6 +97,7 @@ $flag_radlink = '-radlink'
$flag_sanitize_address = '-sanitize:address'
$flag_sanitize_memory = '-sanitize:memory'
$flag_sanitize_thread = '-sanitize:thread'
$flag_show_definables = '-show-defineables'
$flag_subsystem = '-subsystem:'
$flag_show_debug_messages = '-show-debug-messages'
$flag_show_timings = '-show-timings'
@@ -215,8 +216,8 @@ push-location $path_root
$build_args += $flag_microarch_zen5
$build_args += $flag_use_separate_modules
$build_args += $flag_thread_count + $CoreCount_Physical
$build_args += $flag_optimize_none
# $build_args += $flag_optimize_minimal
# $build_args += $flag_optimize_none
$build_args += $flag_optimize_minimal
# $build_args += $flag_optimize_speed
# $build_args += $falg_optimize_aggressive
$build_args += $flag_debug
@@ -233,13 +234,14 @@ push-location $path_root
# $build_args += $flag_sanitize_address
# $build_args += $flag_sanitize_memory
# $build_args += $flag_show_debug_messages
$build_args += $flag_show_definables
$build_args += $flag_show_timings
# $build_args += $flag_build_diagnostics
# TODO(Ed): Enforce nil default allocator
foreach ($arg in $build_args) {
write-host `t $arg -ForegroundColor Cyan
}
# foreach ($arg in $build_args) {
# write-host `t $arg -ForegroundColor Cyan
# }
if ( Test-Path $module_dll) {
$module_dll_pre_build_hash = get-filehash -path $module_dll -Algorithm MD5
@@ -301,8 +303,8 @@ push-location $path_root
# $build_args += $flag_micro_architecture_native
$build_args += $flag_microarch_zen5
$build_args += $flag_thread_count + $CoreCount_Physical
$build_args += $flag_optimize_none
# $build_args += $flag_optimize_minimal
# $build_args += $flag_optimize_none
$build_args += $flag_optimize_minimal
# $build_args += $flag_optimize_speed
# $build_args += $falg_optimize_aggressive
$build_args += $flag_debug
@@ -318,11 +320,12 @@ push-location $path_root
# $build_args += $flag_sanitize_address
# $build_args += $flag_sanitize_memory
# $build_args += $flag_build_diagnostics
$build_args += $flag_show_definables
# TODO(Ed): Enforce nil default allocator
foreach ($arg in $build_args) {
write-host `t $arg -ForegroundColor Cyan
}
# foreach ($arg in $build_args) {
# write-host `t $arg -ForegroundColor Cyan
# }
if ( Test-Path $executable) {
$executable_pre_build_hash = get-filehash -path $executable -Algorithm MD5

View File

@@ -12,6 +12,8 @@ $url_odin_repo = 'https://github.com/Ed94/Odin.git'
$url_sokol = 'https://github.com/Ed94/sokol-odin.git'
$url_sokol_tools = 'https://github.com/floooh/sokol-tools-bin.git'
# TODO(Ed): https://github.com/karl-zylinski/odin-handle-map
$path_harfbuzz = join-path $path_thirdparty 'harfbuzz'
$path_ini_parser = join-path $path_thirdparty 'ini'
$path_odin = join-path $path_toolchain 'Odin'