Compare commits

...

31 Commits

Author SHA1 Message Date
Ed_
aef68be7e2 Misc adjustments/fixes to grime from a few nights ago 2025-11-08 10:36:07 -05:00
Ed_
32b69d50cb adjustments 2025-11-07 00:41:44 -05:00
Ed_
e632bc4c78 working on grime a bit 2025-11-07 00:35:19 -05:00
Ed_
a0ddc3c26e minor misc (end of day stuff) 2025-10-21 23:21:07 -04:00
Ed_
2303866c81 code2/grime progress 2025-10-21 22:57:23 -04:00
Ed_
96c6d58ea0 Progress on code2/grime allocators 2025-10-21 22:10:48 -04:00
Ed_
f63b52f910 curate fixed stack 2025-10-21 22:10:23 -04:00
Ed_
6d5215ac1e Make ensures/verifies in Array asserts 2025-10-21 22:08:29 -04:00
Ed_
1e18592ff5 thinking about key tables... 2025-10-21 22:07:55 -04:00
Ed_
43141183a6 wip messing around with adding jai flavored hash/key table. 2025-10-20 12:51:29 -04:00
Ed_
0607d81f70 ignore .idea 2025-10-18 20:47:49 -04:00
Ed_
58ba273dd1 code2: initial curation of virtual arena 2025-10-18 20:46:06 -04:00
Ed_
0f621b4e1b Started to curate/move over input stuff 2025-10-18 15:01:30 -04:00
Ed_
62979b480e Code2 Progress: more sokol stuff 2025-10-18 15:01:19 -04:00
Ed_
5a3b8ef3b9 WIP(untested, compiles): Started to setup sokol callbacks 2025-10-17 00:58:39 -04:00
Ed_
b46c790756 WIP(Untested, compiles): Grime progress 2025-10-16 20:21:44 -04:00
Ed_
b4f0806d1b WIP: More progress on setting grime back up. 2025-10-16 14:15:26 -04:00
Ed_
3958fac3e0 reduced WorkerID to fit a 128 bit mask 2025-10-15 23:43:03 -04:00
Ed_
724b3eeba5 More edge case testing on the multi-threading, preppared to start moving heavy code back 2025-10-15 21:35:45 -04:00
Ed_
bc742b2116 basic frametime is back 2025-10-15 19:43:02 -04:00
Ed_
fa25081d63 WIP: Getting some of the math sorted out and setting up tick_frametime 2025-10-15 17:21:37 -04:00
Ed_
a0f51913dc initial job queue load test during exit, works with hot-reload. 2025-10-15 01:59:19 -04:00
Ed_
9f75d080a7 hot reload works with tick lanes and job worker loops! 2025-10-15 00:44:14 -04:00
Ed_
ed6a79fd78 job workers ticking (hot-reload untested) 2025-10-14 00:31:33 -04:00
Ed_
c106d3bc96 WIP: tick lanes were working, currently bootstrapping the job system. 2025-10-14 00:04:30 -04:00
Ed_
0d904fba7c WIP: Untested more process runtime bootstrapping, some decisions on how grime is setup.. 2025-10-13 12:47:16 -04:00
Ed_
4abd2401f0 Naming convention change for atomics
cache_coherent_ is what I'm going with for now based off of studying it further.

I really really don't like the "atomic" as the verbiage phrase. It conveys nothing about what the execution engine is actually doing with the thread caches or the bus snoop.
2025-10-13 02:49:07 -04:00
Ed_
5f57cea027 got multi-laned hot-reload 2025-10-13 02:13:58 -04:00
Ed_
8ced7cc71e progress on setting up host/client api process execution 2025-10-12 19:52:17 -04:00
Ed_
406ff97968 progress on setting up host/client api process execution 2025-10-12 16:20:08 -04:00
Ed_
866432723e progress on grime 2025-10-12 16:19:26 -04:00
58 changed files with 5158 additions and 1049 deletions

.gitignore vendored

@@ -8,13 +8,21 @@ build/**
# folders
assets/TX-02-1WN9N6Q8
thirdparty/backtrace
thirdparty/harfbuzz
thirdparty/ini
thirdparty/sokol
thirdparty/sokol-tools
thirdparty/harfbuzz/*
!thirdparty/harfbuzz/harfbuzz.odin
thirdparty/ini/*
thirdparty/sokol/*
!thirdparty/sokol/app/
!thirdparty/sokol/gfx/
!thirdparty/sokol/gp/
thirdparty/sokol-tools/*
thirdparty/stb/*
!thirdparty/stb/truetype/stb_truetype.odin
toolchain/**
toolchain/Odin/*
!toolchain/Odin/base
!toolchain/Odin/core
!toolchain/Odin/vendor
# logs
logs
@@ -27,3 +35,4 @@ ols.json
*.spall
sectr.user
sectr.proj
.idea


@@ -2,7 +2,10 @@
This prototype aims to flesh out ideas I've wanted to explore further on code editing & related tooling.
The things to explore:
The current goal with the prototype is just making a good visualizer & note-aggregation tool for codebases & libraries.
My note repos with affine links give an idea of what that would look like.
The things to explore (future):
* 2D canvas for laying out code visualized in various types of ASTs
* WYSIWYG frontend ASTs
@@ -28,55 +31,14 @@ The dependencies are:
* [sokol-odin (Sectr Fork)](https://github.com/Ed94/sokol-odin)
* [sokol-tools](https://github.com/floooh/sokol-tools)
* Powershell (if you want to use my build scripts)
* backtrace (not used yet)
* freetype (not used yet)
* Eventually some config parser (maybe I'll use metadesk, or [ini](https://github.com/laytan/odin-ini-parser))
The project is so far in a "codebase bootstrapping" phase. Most of the work being done right now is setting up high performance linear zoom rendering for text and UI.
Text has recently hit sufficient performance targets, and now initial UX has become the focus.
The project is organized into 2 runtime modules: sectr_host & sectr.
The host module loads the main module & its memory, hot-reloading its DLL when it detects a change.
Codebase organization:
* App: General app config, state, and operations.
* Engine: client interface for host, tick, update, rendering.
* Has the following definitions: startup, shutdown, reload, tick, clean_frame (which host hooks up to when managing the client dll)
* Will handle async ops.
* Font Provider: Manages fonts.
* Bulk of implementation maintained as a separate library: [VEFontCache-Odin](https://github.com/Ed94/VEFontCache-Odin)
* Grime: Name speaks for itself; stuff not directly related to the target features being iterated on for the prototype.
* Defining dependency aliases or procedure overload tables, rolling own allocator, data structures, etc.
* Input: All human input related features
* Base input features (polling & related) are platform abstracted from sokol_app
* Entirely user rebindable
* Math: The usual for 2D/3D.
* Parsers:
* AST generation, editing, and serialization.
* Parsers for different levels of "syntactic & semantic awareness", Formatting -> Domain Specific AST
* Figure out pragmatic transformations between ASTs.
* Project: Encapsulation of user config/context/state separate from the persistent app's
* Manages the codebase (database & model view controller)
* Manages workspaces : View compositions of the codebase
* UI: Core graphic user interface framework, AST visualization & editing, backend visualization
* PIMGUI (Persistent Immediate Mode User Interface)
* Auto-layout
* Supports heavy procedural generation of box widgets
* Viewports
* Docking/Tiling, Floating, Canvas
Due to the nature of the prototype there are 'sub-groups' such as the codebase being its own ordeal as well as the workspace.
They'll be elaborated in their own documentation
## Gallery
![img](docs/assets/sectr_host_2024-03-09_04-30-27.png)
![img](docs/assets/sectr_host_2024-05-04_12-29-39.png)
![img](docs/assets/Code_2024-05-04_12-55-53.png)
![img](docs/assets/sectr_host_2024-05-11_22-34-15.png)
![img](docs/assets/sectr_host_2024-05-15_03-32-36.png)
![img](docs/assets/Code_2024-05-21_23-15-16.gif)
## Notes


@@ -2,6 +2,5 @@
This is a top-level package to adjust odin to my personalized usage.
I curate all usage of odin's provided package definitons through here. The client and host packages should never directly import them.
There is only one definition with static allocations in Grime. Ideally there are also none, but spall profiler needs a context.
There are no implicit static allocations in Grime. Ideally there are none from the base/core packages either, but some probably leak.


@@ -1,10 +1,31 @@
package grime
/*
This is an non-ideomatic allocator interface inspired Odin/Jai/gb/zpl-c.
This is a non-idiomatic allocator interface inspired by Odin/Jai/gb/zpl-c.
By default the interface is still compatible with Odin's context system; however, the user is expected to wrap the allocator struct with odin_ainfo_wrap for idiomatic procedures.
For details see: Idiomatic Compatibility Wrapper (just search it)
For debug builds, we do not directly do calls to a procedure in the codebase's code paths; instead we pass a proc id that we resolve on interface calls.
This allows for hot-reload without needing to patch persistent allocator references.
To support what idiomatic Odin expects in the respective codepaths, all of sectr's codebase package mappings will wrap procedures in the format:
alias_symbol :: #force_inline proc ... (..., allocator := context.allocator) { return thirdparty_symbol(..., allocator = resolve_odin_allocator(allocator)) }
- or
alias_symbol :: #force_inline proc ... (..., allocator := context.temp_allocator) { return thirdparty_symbol(..., allocator = resolve_odin_allocator(allocator)) }
- or
alias_symbol :: #force_inline proc ... (...) {
context.allocator = resolve_odin_allocator(context.allocator)
context.temp_allocator = resolve_odin_allocator(context.temp_allocator)
return thirdparty_symbol(...) }
}
resolve_odin_allocator: Will produce an Allocator struct with the procedure mapping resolved.
resolve_allocator_proc: Used for the personalized interface to resolve the mapping right on call.
It "probably" is possible to extend the original allocator interface without modifying the original source so that the distinction between the codebase's
generic allocator interface at least converges to use the same proc signature. However, since the package mapping symbols already need the resolve call
for the patchless hot-reload, the cost is just two procs for each interface.
*/
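A minimal sketch of that wrapping format, assuming a hypothetical third-party procedure `thirdparty_load_file` (the name and signature are illustrative, not symbols from this changeset):

```odin
// Hypothetical third-party procedure being aliased (illustrative only):
// thirdparty_load_file :: proc(path: string, allocator := context.allocator) -> []byte

// Package-mapping wrapper: resolve the proc-id-based allocator mapping at
// the call site, so hot-reload never has to patch persistent references.
load_file :: #force_inline proc(path: string, allocator := context.allocator) -> []byte {
	return thirdparty_load_file(path, allocator = resolve_odin_allocator(allocator))
}
```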
AllocatorOp :: enum u32 {
@@ -18,6 +39,11 @@ AllocatorOp :: enum u32 {
Rewind,
SavePoint,
Query, // Must always be implemented
Is_Owner,
Startup,
Shutdown,
Thread_Start,
Thread_Stop,
}
AllocatorQueryFlag :: enum u64 {
Alloc,
@@ -26,32 +52,34 @@ AllocatorQueryFlag :: enum u64 {
Shrink,
Grow,
Resize, // Supports both grow and shrink
Rewind, // Ability to rewind to a save point (ex: arenas, stack), must also be able to save such a point
// Actually_Resize,
// Is_This_Yours,
Actually_Resize,
Multiple_Threads,
Is_Owner,
Hint_Fast_Bump,
Hint_General_Heap,
Hint_Per_Frame_Temporary,
Hint_Debug_Support,
}
AllocatorError :: Odin_AllocatorError
// AllocatorError :: enum i32 {
// None = 0,
// Out_Of_Memory = 1,
// Invalid_Pointer = 2,
// Invalid_Argument = 3,
// Mode_Not_Implemented = 4,
// }
AllocatorQueryFlags :: bit_set[AllocatorQueryFlag; u64]
// AllocatorError :: Odin_AllocatorError
AllocatorError :: enum byte {
None = 0,
Out_Of_Memory = 1,
Invalid_Pointer = 2,
Invalid_Argument = 3,
Mode_Not_Implemented = 4,
Owner = 5,
}
AllocatorSP :: struct {
type_sig: AllocatorProc,
slot: int,
}
AllocatorProc :: #type proc (input: AllocatorProc_In, out: ^AllocatorProc_Out)
AllocatorProc :: #type proc(input: AllocatorProc_In, out: ^AllocatorProc_Out)
AllocatorProc_In :: struct {
data: rawptr,
requested_size: int,
@@ -61,6 +89,7 @@ AllocatorProc_In :: struct {
save_point : AllocatorSP,
},
op: AllocatorOp,
loc: SourceCodeLocation,
}
AllocatorProc_Out :: struct {
using _ : struct #raw_union {
@@ -86,7 +115,7 @@ AllocatorInfo :: struct {
procedure: AllocatorProc,
proc_id: AllocatorProcID,
},
data: rawptr,
data: rawptr,
}
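To make the shape of the interface concrete, here is a sketch of a do-nothing allocator satisfying the `AllocatorProc` signature above (the field names on the query path are assumptions inferred from the surrounding code, not verified against the full source):

```odin
null_allocator_proc :: proc(input: AllocatorProc_In, out: ^AllocatorProc_Out) {
	#partial switch input.op {
	case .Alloc, .Alloc_NoZero:
		// Never hands out memory.
		out.allocation = nil
		out.error      = .Out_Of_Memory
	case .Query:
		// Query must always be implemented: report that no features are supported.
		query := transmute(^AllocatorQueryInfo) out
		query.features = AllocatorQueryFlags{}
	case:
		out.error = .Mode_Not_Implemented
	}
}
```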
// #assert(size_of(AllocatorQueryInfo) == size_of(AllocatorProc_Out))
@@ -94,149 +123,52 @@ AllocatorInfo :: struct {
AllocatorProcID :: enum uintptr {
FArena,
VArena,
CArena,
Pool,
Slab,
Odin_Arena,
Arena,
// Pool,
// Slab,
// Odin_Arena,
// Odin_VArena,
}
resolve_allocator_proc :: #force_inline proc(procedure: $AllocatorProcType) -> AllocatorProc {
resolve_allocator_proc :: #force_inline proc "contextless" (procedure: $AllocatorProcType) -> AllocatorProc {
when ODIN_DEBUG {
switch (transmute(AllocatorProcID)procedure) {
case .FArena: return farena_allocator_proc
case .VArena: return nil // varena_allocaotr_proc
case .CArena: return nil // carena_allocator_proc
case .Pool: return nil // pool_allocator_proc
case .Slab: return nil // slab_allocator_proc
case .Odin_Arena: return nil // odin_arena_allocator_proc
case .VArena: return varena_allocator_proc
case .Arena: return arena_allocator_proc
// case .Pool: return pool_allocator_proc
// case .Slab: return slab_allocator_proc
// case .Odin_Arena: return odin_arena_allocator_proc
// case .Odin_VArena: return odin_varena_allocator_proc
}
}
else {
return transmute(AllocatorProc) procedure
}
return nil
panic_contextless("Unresolvable procedure")
}
MEMORY_ALIGNMENT_DEFAULT :: 2 * size_of(rawptr)
ainfo :: #force_inline proc(ainfo := context.allocator) -> AllocatorInfo { return transmute(AllocatorInfo) ainfo }
odin_allocator :: #force_inline proc(ainfo: AllocatorInfo) -> Odin_Allocator { return transmute(Odin_Allocator) ainfo }
allocator_query :: proc(ainfo := context.allocator) -> AllocatorQueryInfo {
assert(ainfo.procedure != nil)
out: AllocatorQueryInfo; resolve_allocator_proc(ainfo.procedure)({data = ainfo.data, op = .Query}, transmute(^AllocatorProc_Out) & out)
return out
}
mem_free_ainfo :: proc(mem: []byte, ainfo: AllocatorInfo) {
assert(ainfo.procedure != nil)
resolve_allocator_proc(ainfo.procedure)({data = ainfo.data, op = .Free, old_allocation = mem}, & {})
}
mem_reset :: proc(ainfo := context.allocator) {
assert(ainfo.procedure != nil)
resolve_allocator_proc(ainfo.procedure)({data = ainfo.data, op = .Reset}, &{})
}
mem_rewind :: proc(ainfo := context.allocator, save_point: AllocatorSP) {
assert(ainfo.procedure != nil)
resolve_allocator_proc(ainfo.procedure)({data = ainfo.data, op = .Rewind, save_point = save_point}, & {})
}
mem_save_point :: proc(ainfo := context.allocator) -> AllocatorSP {
assert(ainfo.procedure != nil)
out: AllocatorProc_Out
resolve_allocator_proc(ainfo.procedure)({data = ainfo.data, op = .SavePoint}, & out)
return out.save_point
}
mem_alloc :: proc(size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, no_zero: b32 = false, ainfo : $Type = context.allocator) -> []byte {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
op = no_zero ? .Alloc_NoZero : .Alloc,
requested_size = size,
alignment = alignment,
resolve_odin_allocator :: #force_inline proc "contextless" (allocator: Odin_Allocator) -> Odin_Allocator {
when ODIN_DEBUG {
switch (transmute(AllocatorProcID)allocator.procedure) {
case .FArena: return { farena_odin_allocator_proc, allocator.data }
case .VArena: return { varena_odin_allocator_proc, allocator.data }
case .Arena: return { arena_odin_allocator_proc, allocator.data }
// case .Pool: return nil // pool_allocator_proc
// case .Slab: return nil // slab_allocator_proc
// case .Odin_Arena: return nil // odin_arena_allocator_proc
// case .Odin_VArena: return odin_varena_allocator_proc
}
}
output: AllocatorProc_Out
resolve_allocator_proc(ainfo.procedure)(input, & output)
return output.allocation
}
mem_grow :: proc(mem: []byte, size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, no_zero: b32 = false, ainfo := context.allocator) -> []byte {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
op = no_zero ? .Grow_NoZero : .Grow,
requested_size = size,
alignment = alignment,
old_allocation = mem,
else {
switch (allocator.procedure) {
case farena_allocator_proc: return { farena_odin_allocator_proc, allocator.data }
case varena_allocator_proc: return { varena_odin_allocator_proc, allocator.data }
case arena_allocator_proc: return { arena_odin_allocator_proc, allocator.data }
}
}
output: AllocatorProc_Out
resolve_allocator_proc(ainfo.procedure)(input, & output)
return output.allocation
panic_contextless("Unresolvable procedure")
}
mem_resize :: proc(mem: []byte, size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, no_zero: b32 = false, ainfo := context.allocator) -> []byte {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
op = len(mem) < size ? .Shrink : no_zero ? .Grow_NoZero : .Grow,
requested_size = size,
alignment = alignment,
old_allocation = mem,
}
output: AllocatorProc_Out
resolve_allocator_proc(ainfo.procedure)(input, & output)
return output.allocation
}
mem_shrink :: proc(mem: []byte, size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, no_zero: b32 = false, ainfo := context.allocator) -> []byte {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
op = .Shrink,
requested_size = size,
alignment = alignment,
old_allocation = mem,
}
output: AllocatorProc_Out
resolve_allocator_proc(ainfo.procedure)(input, & output)
return output.allocation
}
alloc_type :: proc($Type: typeid, alignment: int = MEMORY_ALIGNMENT_DEFAULT, no_zero: b32 = false, ainfo := context.allocator) -> ^Type {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
op = no_zero ? .Alloc_NoZero : .Alloc,
requested_size = size_of(Type),
alignment = alignment,
}
output: AllocatorProc_Out
resolve_allocator_proc(ainfo.procedure)(input, & output)
return transmute(^Type) raw_data(output.allocation)
}
alloc_slice :: proc($SliceType: typeid / []$Type, num : int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, no_zero: b32 = false, ainfo := context.allocator) -> []Type {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
op = no_zero ? .Alloc_NoZero : .Alloc,
requested_size = size_of(Type) * num,
alignment = alignment,
}
output: AllocatorProc_Out
resolve_allocator_proc(ainfo.procedure)(input, & output)
return transmute([]Type) slice(raw_data(output.allocation), num)
}
/*
Idiomatic Compatibility Wrapper
Ideally we wrap all procedures that go to idiomatic Odin with the following pattern:
Usually we do the following:
```
import "core:dynlib"
os_lib_load :: dynlib.load_library
```
Instead:
os_lib_load :: #force_inline proc "contextless" (... same signature as load_library, allocator := ...) { return dynlib.load_library(..., odin_ainfo_wrap(allocator)) }
*/
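Concretely, the `dynlib` case above might be wrapped like this (a sketch only; `load_library`'s exact signature in `core:dynlib` may differ across Odin versions):

```odin
import "core:dynlib"

// Instead of aliasing directly (os_lib_load :: dynlib.load_library), route
// the context allocators through the resolve step first, matching the third
// wrapper variant described in the allocator interface comment.
os_lib_load :: #force_inline proc(path: string, global_symbols := false) -> (dynlib.Library, bool) {
	context.allocator      = resolve_odin_allocator(context.allocator)
	context.temp_allocator = resolve_odin_allocator(context.temp_allocator)
	return dynlib.load_library(path, global_symbols)
}
```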
odin_allocator_mode_to_allocator_op :: #force_inline proc "contextless" (mode: Odin_AllocatorMode, size_diff : int) -> AllocatorOp {
switch mode {
@@ -252,45 +184,104 @@ odin_allocator_mode_to_allocator_op :: #force_inline proc "contextless" (mode: O
panic_contextless("Impossible path")
}
odin_allocator_wrap_proc :: proc(
allocator_data : rawptr,
mode : Odin_AllocatorMode,
size : int,
alignment : int,
old_memory : rawptr,
old_size : int,
loc := #caller_location
) -> ( data : []byte, alloc_error : Odin_AllocatorError)
{
allocatorinfo :: #force_inline proc(ainfo := context.allocator) -> AllocatorInfo { return transmute(AllocatorInfo) ainfo }
allocator :: #force_inline proc(ainfo: AllocatorInfo) -> Odin_Allocator { return transmute(Odin_Allocator) ainfo }
allocator_query :: proc(ainfo := context.allocator, loc := #caller_location) -> AllocatorQueryInfo {
assert(ainfo.procedure != nil)
out: AllocatorQueryInfo; resolve_allocator_proc(ainfo.procedure)({data = ainfo.data, op = .Query, loc = loc}, transmute(^AllocatorProc_Out) & out)
return out
}
mem_free_ainfo :: proc(mem: []byte, ainfo:= context.allocator, loc := #caller_location) {
assert(ainfo.procedure != nil)
resolve_allocator_proc(ainfo.procedure)({data = ainfo.data, op = .Free, old_allocation = mem, loc = loc}, & {})
}
mem_reset :: proc(ainfo := context.allocator, loc := #caller_location) {
assert(ainfo.procedure != nil)
resolve_allocator_proc(ainfo.procedure)({data = ainfo.data, op = .Reset, loc = loc}, &{})
}
mem_rewind :: proc(ainfo := context.allocator, save_point: AllocatorSP, loc := #caller_location) {
assert(ainfo.procedure != nil)
resolve_allocator_proc(ainfo.procedure)({data = ainfo.data, op = .Rewind, save_point = save_point, loc = loc}, & {})
}
mem_save_point :: proc(ainfo := context.allocator, loc := #caller_location) -> AllocatorSP {
assert(ainfo.procedure != nil)
out: AllocatorProc_Out; resolve_allocator_proc(ainfo.procedure)({data = ainfo.data, op = .SavePoint, loc = loc}, & out)
return out.save_point
}
mem_alloc :: proc(size: int, alignment: int = DEFAULT_ALIGNMENT, no_zero: bool = false, ainfo: $Type = context.allocator, loc := #caller_location) -> ([]byte, AllocatorError) {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = (transmute(^AllocatorInfo)allocator_data).data,
data = ainfo.data,
op = no_zero ? .Alloc_NoZero : .Alloc,
requested_size = size,
alignment = alignment,
old_allocation = slice(transmute([^]byte)old_memory, old_size),
op = odin_allocator_mode_to_allocator_op(mode, size - old_size),
loc = loc,
}
output: AllocatorProc_Out
resolve_allocator_proc((transmute(^Odin_Allocator)allocator_data).procedure)(input, & output)
#partial switch mode {
case .Query_Features:
debug_trap() // TODO(Ed): Finish this...
return nil, nil
case .Query_Info:
info := (^Odin_AllocatorQueryInfo)(old_memory)
if info != nil && info.pointer != nil {
info.size = output.left
info.alignment = cast(int) (transmute(AllocatorQueryInfo)output).alignment
return slice(transmute(^byte)info, size_of(info^) ), nil
}
return nil, nil
output: AllocatorProc_Out; resolve_allocator_proc(ainfo.procedure)(input, & output)
return output.allocation, output.error
}
mem_grow :: proc(mem: []byte, size: int, alignment: int = DEFAULT_ALIGNMENT, no_zero: bool = false, ainfo := context.allocator, loc := #caller_location) -> ([]byte, AllocatorError) {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
op = no_zero ? .Grow_NoZero : .Grow,
requested_size = size,
alignment = alignment,
old_allocation = mem,
loc = loc,
}
return output.allocation, cast(Odin_AllocatorError)output.error
output: AllocatorProc_Out; resolve_allocator_proc(ainfo.procedure)(input, & output)
return output.allocation, output.error
}
mem_resize :: proc(mem: []byte, size: int, alignment: int = DEFAULT_ALIGNMENT, no_zero: bool = false, ainfo := context.allocator, loc := #caller_location) -> ([]byte, AllocatorError) {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
op = len(mem) < size ? .Shrink : no_zero ? .Grow_NoZero : .Grow,
requested_size = size,
alignment = alignment,
old_allocation = mem,
loc = loc,
}
output: AllocatorProc_Out; resolve_allocator_proc(ainfo.procedure)(input, & output)
return output.allocation, output.error
}
mem_shrink :: proc(mem: []byte, size: int, alignment: int = DEFAULT_ALIGNMENT, no_zero: bool = false, ainfo := context.allocator, loc := #caller_location) -> ([]byte, AllocatorError) {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
op = .Shrink,
requested_size = size,
alignment = alignment,
old_allocation = mem,
loc = loc,
}
output: AllocatorProc_Out; resolve_allocator_proc(ainfo.procedure)(input, & output)
return output.allocation, output.error
}
odin_ainfo_giftwrap :: #force_inline proc(ainfo := context.allocator) -> Odin_Allocator {
@(thread_local)
cursed_allocator_wrap_ref : Odin_Allocator
cursed_allocator_wrap_ref = {ainfo.procedure, ainfo.data}
return {odin_allocator_wrap_proc, & cursed_allocator_wrap_ref}
alloc_type :: proc($Type: typeid, alignment: int = DEFAULT_ALIGNMENT, no_zero: bool = false, ainfo := context.allocator, loc := #caller_location) -> (^Type, AllocatorError) {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
op = no_zero ? .Alloc_NoZero : .Alloc,
requested_size = size_of(Type),
alignment = alignment,
loc = loc,
}
output: AllocatorProc_Out; resolve_allocator_proc(ainfo.procedure)(input, & output)
return transmute(^Type) raw_data(output.allocation), output.error
}
alloc_slice :: proc($SliceType: typeid / []$Type, num: int, alignment: int = DEFAULT_ALIGNMENT, no_zero: bool = false, ainfo := context.allocator, loc := #caller_location) -> ([]Type, AllocatorError) {
assert(ainfo.procedure != nil)
input := AllocatorProc_In {
data = ainfo.data,
op = no_zero ? .Alloc_NoZero : .Alloc,
requested_size = size_of(Type) * num,
alignment = alignment,
loc = loc,
}
output: AllocatorProc_Out; resolve_allocator_proc(ainfo.procedure)(input, & output)
return transmute([]Type) slice(raw_data(output.allocation), num), output.error
}


@@ -1 +0,0 @@
package grime

code2/grime/assert.odin Normal file

@@ -0,0 +1,26 @@
package grime
// TODO(Ed): Below should be defined per-package?
ensure :: #force_inline proc(condition: bool, msg := #caller_expression, location := #caller_location) -> bool {
if condition do return false
log_print( msg, LoggerLevel.Warning, location )
when ODIN_DEBUG == false do return true
else {
debug_trap()
return true
}
}
// TODO(Ed) : Setup exit codes!
fatal :: #force_inline proc(msg: string, exit_code: int = -1, location := #caller_location) {
log_print( msg, LoggerLevel.Fatal, location )
debug_trap()
process_exit( exit_code )
}
// TODO(Ed) : Setup exit codes!
verify :: #force_inline proc(condition: bool, msg: string, exit_code: int = -1, location := #caller_location) -> bool {
if condition do return true
log_print( msg, LoggerLevel.Fatal, location )
debug_trap()
process_exit( exit_code )
}
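An illustrative use of the helpers above (a sketch; `read_settings` is a made-up procedure):

```odin
read_settings :: proc(data: []byte) -> bool {
	// verify: unrecoverable invariant. Logs at Fatal, traps, then exits.
	verify(data != nil, "settings buffer must not be nil")
	// ensure: returns true when the condition was violated (after logging a
	// warning, and trapping in debug builds), so the caller can branch on it.
	if ensure(len(data) > 0, "settings buffer empty, falling back to defaults") {
		return false
	}
	return true
}
```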


@@ -1,6 +1,6 @@
package grime
Context :: struct {
OdinContext :: struct {
allocator: AllocatorInfo,
temp_allocator: AllocatorInfo,
assertion_failure_proc: Assertion_Failure_Proc,
@@ -14,6 +14,6 @@ Context :: struct {
_internal: rawptr,
}
context_usr :: #force_inline proc( $ Type : typeid ) -> (^Type) {
context_user :: #force_inline proc( $ Type : typeid ) -> (^Type) {
return cast(^Type) context.user_ptr
}


@@ -5,25 +5,23 @@ Based on gencpp's and thus zpl's Array implementation
Made because of the map issue with fonts during hot-reload.
I didn't want to make the HMapZPL impl with the [dynamic] array for now, to isolate the hot-reload issue (when I was diagnosing)
Update 2024-5-26:
TODO(Ed): Raw_Dynamic_Array is defined within base:runtime/core.odin and exposes what we need for worst case hot-reloads.
Note 2024-5-26:
Raw_Dynamic_Array is defined within base:runtime/core.odin and exposes what we need for worst case hot-reloads.
So it's best to go back to regular dynamic arrays at some point.
Update 2025-5-12:
Note 2025-5-12:
I can use either... so I'll just keep both
*/
ArrayHeader :: struct ( $ Type : typeid ) {
backing : Odin_Allocator,
dbg_name : string,
fixed_cap : b32,
capacity : int,
num : int,
data : [^]Type,
ArrayHeader :: struct ($Type: typeid) {
backing: Odin_Allocator,
dbg_name: string,
fixed_cap: b64,
capacity: int,
num: int,
data: [^]Type,
}
Array :: struct ( $ Type : typeid ) {
using header : ^ArrayHeader(Type),
Array :: struct ($Type: typeid) {
using header: ^ArrayHeader(Type),
}
array_underlying_slice :: proc(s: []($ Type)) -> Array(Type) {
@@ -32,32 +30,160 @@ array_underlying_slice :: proc(s: []($ Type)) -> Array(Type) {
array := cursor(to_bytes(s))[ - header_size]
return
}
array_to_slice :: #force_inline proc "contextless" ( using self : Array($ Type) ) -> []Type { return slice( data, int(num)) }
array_to_slice_capacity :: #force_inline proc "contextless" ( using self : Array($ Type) ) -> []Type { return slice( data, int(capacity)) }
array_grow_formula :: proc( value : u64 ) -> u64 {
result := (2 * value) + 8
return result
}
array_grow_formula :: #force_inline proc "contextless" (value: int) -> int { return (2 * value) + 8 }
array_block_size :: #force_inline proc "contextless" (self: Array($Type)) -> int { return size_of(ArrayHeader(Type)) + self.capacity * size_of(Type) }
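For intuition, the grow formula `(2 * value) + 8` bumps an empty array straight to 8 slots and roughly doubles from there:

```odin
// Capacity progression under array_grow_formula:
// 0 -> 8, 8 -> 24, 24 -> 56, 56 -> 120, 120 -> 248, ...
```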
array_init :: proc( $Array_Type : typeid/Array($Type), capacity : u64,
allocator := context.allocator, fixed_cap : b32 = false, dbg_name : string = ""
) -> ( result : Array(Type), alloc_error : AllocatorError )
//region Lifetime & Memory Resize Operations
array_init :: proc( $Array_Type : typeid / Array($Type), capacity: int,
allocator := context.allocator, fixed_cap: bool = false, dbg_name: string = ""
) -> (result: Array(Type), alloc_error: AllocatorError)
{
header_size := size_of(ArrayHeader(Type))
array_size := header_size + int(capacity) * size_of(Type)
raw_mem : rawptr
raw_mem, alloc_error = alloc( array_size, allocator = allocator )
array_size := size_of(ArrayHeader(Type)) + int(capacity) * size_of(Type)
raw_mem: []byte
raw_mem, alloc_error = mem_alloc(array_size, ainfo = allocator)
// log( str_fmt_tmp("array reserved: %d", header_size + int(capacity) * size_of(Type) ))
if alloc_error != AllocatorError.None do return
result.header = cast( ^ArrayHeader(Type)) raw_mem
result.header = transmute( ^ArrayHeader(Type)) cursor(raw_mem)
result.backing = allocator
result.dbg_name = dbg_name
result.fixed_cap = fixed_cap
result.fixed_cap = cast(b64) fixed_cap
result.capacity = capacity
result.data = cast( [^]Type ) (cast( [^]ArrayHeader(Type)) result.header)[ 1:]
result.data = transmute( [^]Type ) (transmute( [^]ArrayHeader(Type)) result.header)[ 1:]
return
}
}
array_free :: proc(self: Array($Type)) {
free(self.header, self.backing)
self.data = nil
}
array_grow :: proc(self: ^Array($Type), min_capacity: int) -> AllocatorError {
new_capacity := array_grow_formula(self.capacity)
if new_capacity < min_capacity do new_capacity = min_capacity
return array_set_capacity( self, new_capacity )
}
array_resize :: proc(self: ^Array($Type), num: int) -> AllocatorError {
if self.capacity < num {
grow_result := array_grow( self, num )
if grow_result != AllocatorError.None do return grow_result
}
self.num = num
return AllocatorError.None
}
array_set_capacity :: proc( self : ^Array( $ Type ), new_capacity: int) -> AllocatorError
{
if new_capacity == self.capacity do return AllocatorError.None
if new_capacity < self.num { self.num = new_capacity; return AllocatorError.None }
header_size :: size_of(ArrayHeader(Type))
new_size := header_size + new_capacity * size_of(Type)
old_size := header_size + self.capacity * size_of(Type)
new_mem, result_code := mem_resize( slice(transmute(^u8)self.header, old_size), new_size, DEFAULT_ALIGNMENT, ainfo = self.backing )
if ensure( result_code == AllocatorError.None, "Failed to allocate for new array capacity" ) {
log_print( "Failed to allocate for new array capacity", level = LoggerLevel.Warning )
return result_code
}
if new_mem == nil { ensure(false, "new_mem is nil but no allocation error"); return result_code }
self.header = cast( ^ArrayHeader(Type)) raw_data(new_mem);
self.header.data = cast( [^]Type ) (cast( [^]ArrayHeader(Type)) self.header)[ 1:]
self.header.capacity = new_capacity
self.header.num = self.num
return result_code
}
//endregion Lifetime & Memory Resize Operations
// Assumes non-overlapping memory for items and appendee
array_append_array :: proc(self: ^Array($Type), other : Array(Type)) -> AllocatorError {
if self.num + other.num > self.capacity {
grow_result := array_grow( self, self.num + other.num )
if grow_result != AllocatorError.None do return grow_result
}
copy_non_overlaping(self.data[self.num:], other.data, other.num)
self.num += other.num
return AllocatorError.None
}
// Assume non-overlapping memory for items and appendee
array_append_slice :: proc(self : ^Array($Type), items: []Type) -> AllocatorError {
if self.num + len(items) > self.capacity {
grow_result := array_grow(self, self.num + len(items))
if grow_result != AllocatorError.None do return grow_result
}
copy_non_overlaping(self.data[self.num:], cursor(items), len(items))
self.num += len(items)
return AllocatorError.None
}
array_append_value :: proc(self: ^Array($Type), value: Type) -> AllocatorError {
if self.header.num == self.header.capacity {
grow_result := array_grow( self, self.header.capacity )
if grow_result != AllocatorError.None do return grow_result
}
self.header.data[ self.header.num ] = value
self.header.num += 1
return AllocatorError.None
}
// Asumes non-overlapping for items.
array_append_at_slice :: proc(self : ^Array($Type ), items: []Type, id: int) -> AllocatorError {
assert(id < self.num, "Why are we doing an append at beyond the bounds of the current element count")
id := id
if id >= self.num { return array_append_slice(self, items) }
if self.num + len(items) > self.capacity {
grow_result := array_grow( self, self.num + len(items) )
if grow_result != AllocatorError.None do return grow_result
}
// TODO(Ed) : VERIFY VIA DEBUG THIS COPY IS FINE
ensure(false, "time to check....")
mem_copy (self.data[id + len(items):], self.data[id:], (self.num - id) * size_of(Type))
mem_copy_non_overlaping(self.data[id:], cursor(items), len(items) * size_of(Type) )
self.num += len(items)
return AllocatorError.None
}
array_append_at_value :: proc(self: ^Array($Type), item: Type, id: int) -> AllocatorError {
assert(id < self.num, "Why are we doing an append at beyond the bounds of the current element count")
id := id; {
// TODO(Ed): Not sure I want this...
if id >= self.num do id = self.num
if id < 0 do id = 0
}
if self.capacity < self.num + 1 {
grow_result := array_grow( self, self.capacity )
if grow_result != AllocatorError.None do return grow_result
}
mem_copy(self.data[id + 1:], self.data[id:], int(self.num - id) * size_of(Type))
self.data[id] = item
self.num += 1
return AllocatorError.None
}
array_back :: #force_inline proc "contextless" (self : Array($Type)) -> Type { assert_contextless(self.num > 0); return self.data[self.num - 1] }
array_clear :: #force_inline proc "contextless" (self: ^Array($Type), zero_data: bool = false) {
if zero_data do zero(self.data, int(self.num) * size_of(Type))
self.num = 0
}
array_fill :: proc(self: Array($Type), begin, end: u64, value: Type) -> bool {
assert(end - begin <= self.num)
assert(end <= self.num)
if (end - begin > self.num) || (end > self.num) do return false
mem_fill(self.data[begin:], value, end - begin)
return true
}
// Will push value into the array (will not grow if at capacity, use append instead for when that matters)
array_push :: #force_inline proc "contextless" (self: ^Array($Type), value: Type) -> bool {
if self.num == self.capacity { return false }
self.data[self.num] = value
self.num += 1
return true
}
array_remove_at :: proc(self: ^Array($Type), id: int) {
assert( id < self.num, "Attempted to remove from an index larger than the array" )
mem_copy(self.data[id:], self.data[id + 1:], (self.num - id - 1) * size_of(Type))
self.num -= 1
}
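A minimal usage sketch of the append/push API above. This is hypothetical: it assumes an `array_make(Type, capacity, allocator)` constructor exists alongside these procs and uses the pointer-receiver forms.

```odin
// Hypothetical usage sketch; `array_make` is assumed, not shown above.
example_array_usage :: proc() {
	arr, err := array_make(int, 8, context.allocator)
	assert(err == AllocatorError.None)
	// append grows on demand; push fails instead of growing
	_ = array_append_value(& arr, 1)
	_ = array_append_slice(& arr, []int{2, 3, 4})
	_ = array_push(& arr, 5)   // returns false if num == capacity
	_ = array_back(arr)        // last element pushed
	array_remove_at(& arr, 0)  // shifts the remaining elements down
	array_clear(& arr)         // num = 0; data untouched unless zero_data
}
```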

View File

@@ -1,125 +0,0 @@
package grime
FArena :: struct {
mem: []byte,
used: int,
}
@require_results
farena_make :: proc(backing: []byte) -> FArena {
arena := FArena {mem = backing}
return arena
}
farena_init :: proc(arena: ^FArena, backing: []byte) {
assert(arena != nil)
arena.mem = backing
arena.used = 0
}
@require_results
farena_push :: proc(arena: ^FArena, $Type: typeid, amount: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT) -> []Type {
assert(arena != nil)
if amount == 0 {
return {}
}
desired := size_of(Type) * amount
to_commit := align_pow2(desired, alignment)
unused := len(arena.mem) - arena.used
assert(to_commit <= unused)
ptr := cursor(arena.mem[arena.used:])
arena.used += to_commit
return slice(ptr, amount)
}
@require_results
farena_grow :: proc(arena: ^FArena, old_allocation: []byte, requested_size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT, should_zero: bool = true) -> (allocation: []byte, err: AllocatorError) {
if len(old_allocation) == 0 {
return {}, .Invalid_Argument
}
alloc_end := end(old_allocation)
arena_end := cursor(arena.mem)[arena.used:]
if alloc_end != arena_end {
// Not at the end, can't grow in place
return {}, .Out_Of_Memory
}
// Calculate growth
grow_amount := requested_size - len(old_allocation)
aligned_grow := align_pow2(grow_amount, alignment)
unused := len(arena.mem) - arena.used
if aligned_grow > unused {
// Not enough space
return {}, .Out_Of_Memory
}
arena.used += aligned_grow
allocation = slice(cursor(old_allocation), requested_size)
if should_zero {
mem_zero( cursor(allocation)[len(old_allocation):], grow_amount )
}
err = .None
return
}
@require_results
farena_shirnk :: proc(arena: ^FArena, old_allocation: []byte, requested_size: int, alignment: int = MEMORY_ALIGNMENT_DEFAULT) -> (allocation: []byte, err: AllocatorError) {
if len(old_allocation) == 0 {
return {}, .Invalid_Argument
}
alloc_end := end(old_allocation)
arena_end := cursor(arena.mem)[arena.used:]
if alloc_end != arena_end {
// Not at the end, can't shrink but return adjusted size
allocation = old_allocation[:requested_size]
err = .None
return
}
// Calculate shrinkage
aligned_original := align_pow2(len(old_allocation), MEMORY_ALIGNMENT_DEFAULT)
aligned_new := align_pow2(requested_size, alignment)
arena.used -= (aligned_original - aligned_new)
allocation = old_allocation[:requested_size]
return
}
farena_reset :: proc(arena: ^FArena) {
arena.used = 0
}
farena_rewind :: proc(arena: ^FArena, save_point: AllocatorSP) {
assert(save_point.type_sig == farena_allocator_proc)
assert(save_point.slot >= 0 && save_point.slot <= arena.used)
arena.used = save_point.slot
}
farena_save :: #force_inline proc(arena: FArena) -> AllocatorSP { return AllocatorSP { type_sig = farena_allocator_proc, slot = arena.used } }
farena_allocator_proc :: proc(input: AllocatorProc_In, output: ^AllocatorProc_Out) {
assert(output != nil)
assert(input.data != nil)
arena := transmute(^FArena) input.data
switch input.op
{
case .Alloc, .Alloc_NoZero:
output.allocation = to_bytes(farena_push(arena, byte, input.requested_size, input.alignment))
if input.op == .Alloc {
zero(output.allocation)
}
case .Free:
// No-op for arena
case .Reset:
farena_reset(arena)
case .Grow, .Grow_NoZero:
output.allocation, output.error = farena_grow(arena, input.old_allocation, input.requested_size, input.alignment, input.op == .Grow)
case .Shrink:
output.allocation, output.error = farena_shirnk(arena, input.old_allocation, input.requested_size, input.alignment)
case .Rewind:
farena_rewind(arena, input.save_point)
case .SavePoint:
output.save_point = farena_save(arena^)
case .Query:
output.features = {.Alloc, .Reset, .Grow, .Shrink, .Rewind}
output.max_alloc = len(arena.mem) - arena.used
output.min_alloc = 0
output.left = output.max_alloc
output.save_point = farena_save(arena^)
}
}
when ODIN_DEBUG {
farena_ainfo :: #force_inline proc "contextless" (arena: ^FArena) -> AllocatorInfo { return AllocatorInfo{proc_id = .FArena, data = arena} }
farena_allocator :: #force_inline proc "contextless" (arena: ^FArena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{proc_id = .FArena, data = arena} }
}
else {
farena_ainfo :: #force_inline proc "contextless" (arena: ^FArena) -> AllocatorInfo { return AllocatorInfo{procedure = farena_allocator_proc, data = arena} }
farena_allocator :: #force_inline proc "contextless" (arena: ^FArena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{procedure = farena_allocator_proc, data = arena} }
}

View File

@@ -1,14 +1,14 @@
package grime
// TODO(Ed): Review when os2 is done.
// TODO(Ed): Make an async option...
// TODO(Ed): Make an async option?
file_copy_sync :: proc( path_src, path_dst: string, allocator := context.allocator ) -> b32
{
file_size : i64
{
path_info, result := file_status( path_src, allocator )
if result != OS_ERROR_NONE {
log_fmt("Could not get file info: %v", result, LoggerLevel.Error )
log_print_fmt("Could not get file info: %v", result, LoggerLevel.Error )
return false
}
file_size = path_info.size
@@ -16,14 +16,14 @@ file_copy_sync :: proc( path_src, path_dst: string, allocator := context.allocat
src_content, result := file_read_entire( path_src, allocator )
if ! result {
log_fmt( "Failed to read file to copy: %v", path_src, LoggerLevel.Error )
log_print_fmt( "Failed to read file to copy: %v", path_src, LoggerLevel.Error )
debug_trap()
return false
}
result = file_write_entire( path_dst, src_content, false )
if ! result {
log_fmt( "Failed to copy file: %v", path_dst, LoggerLevel.Error )
log_print_fmt( "Failed to copy file: %v", path_dst, LoggerLevel.Error )
debug_trap()
return false
}

View File

@@ -0,0 +1,186 @@
package grime
/* Fixed Arena Allocator (fixed-size block bump allocator) */
FArena :: struct {
mem: []byte,
used: int,
}
@require_results
farena_make :: proc "contextless" (backing: []byte) -> FArena {
arena := FArena {mem = backing}
return arena
}
farena_init :: proc "contextless" (arena: ^FArena, backing: []byte) {
assert_contextless(arena != nil)
arena.mem = backing
arena.used = 0
}
@require_results
farena_push :: proc "contextless" (arena: ^FArena, $Type: typeid, amount: int, alignment: int = DEFAULT_ALIGNMENT, loc := #caller_location) -> ([]Type, AllocatorError) {
assert_contextless(arena != nil)
if amount == 0 {
return {}, .None
}
desired := size_of(Type) * amount
to_commit := align_pow2(desired, alignment)
unused := len(arena.mem) - arena.used
if to_commit > unused {
return {}, .Out_Of_Memory
}
ptr := cursor(arena.mem)[arena.used:]
arena.used += to_commit
return slice(ptr, amount), .None
}
@require_results
farena_grow :: proc "contextless" (arena: ^FArena, old_allocation: []byte, requested_size: int, alignment: int = DEFAULT_ALIGNMENT, should_zero: bool = true, loc := #caller_location) -> (allocation: []byte, err: AllocatorError) {
assert_contextless(arena != nil)
if len(old_allocation) == 0 {
return {}, .Invalid_Argument
}
alloc_end := end(old_allocation)
arena_end := cursor(arena.mem)[arena.used:]
if alloc_end != arena_end {
return {}, .Out_Of_Memory
}
// Calculate growth
grow_amount := requested_size - len(old_allocation)
aligned_grow := align_pow2(grow_amount, alignment)
unused := len(arena.mem) - arena.used
if aligned_grow > unused {
return {}, .Out_Of_Memory
}
arena.used += aligned_grow
allocation = slice(cursor(old_allocation), requested_size)
if should_zero {
mem_zero( cursor(allocation)[len(old_allocation):], grow_amount )
}
err = .None
return
}
@require_results
farena_shirnk :: proc "contextless" (arena: ^FArena, old_allocation: []byte, requested_size: int, alignment: int = DEFAULT_ALIGNMENT, loc := #caller_location) -> (allocation: []byte, err: AllocatorError) {
assert_contextless(arena != nil)
if len(old_allocation) == 0 {
return {}, .Invalid_Argument
}
alloc_end := end(old_allocation)
arena_end := cursor(arena.mem)[arena.used:]
if alloc_end != arena_end {
// Not at the end, can't shrink but return adjusted size
return old_allocation[:requested_size], .None
}
// Calculate shrinkage
aligned_original := align_pow2(len(old_allocation), DEFAULT_ALIGNMENT)
aligned_new := align_pow2(requested_size, alignment)
arena.used -= (aligned_original - aligned_new)
return old_allocation[:requested_size], .None
}
farena_reset :: #force_inline proc "contextless" (arena: ^FArena, loc := #caller_location) {
assert_contextless(arena != nil)
arena.used = 0
}
farena_rewind :: #force_inline proc "contextless" (arena: ^FArena, save_point: AllocatorSP, loc := #caller_location) {
assert_contextless(save_point.type_sig == farena_allocator_proc)
assert_contextless(save_point.slot >= 0 && save_point.slot <= arena.used)
arena.used = save_point.slot
}
farena_save :: #force_inline proc "contextless" (arena: FArena) -> AllocatorSP { return AllocatorSP { type_sig = farena_allocator_proc, slot = arena.used } }
farena_is_owner :: #force_inline proc "contextless" (arena: FArena, memory: []byte) -> bool {
p0 := transmute(uintptr) cursor(memory)
p1 := transmute(uintptr) end(memory)
arena_p0 := transmute(uintptr) cursor(arena.mem)
arena_p1 := arena_p0 + uintptr(arena.used)
return arena_p0 <= p0 && p1 <= arena_p1
}
farena_allocator_proc :: proc(input: AllocatorProc_In, output: ^AllocatorProc_Out) {
assert_contextless(output != nil)
assert_contextless(input.data != nil)
arena := transmute(^FArena) input.data
switch input.op {
case .Alloc, .Alloc_NoZero:
output.allocation, output.error = farena_push(arena, byte, input.requested_size, input.alignment, input.loc)
if input.op == .Alloc {
zero(output.allocation)
}
return
case .Free:
// No-op for arena
return
case .Reset:
farena_reset(arena)
return
case .Grow, .Grow_NoZero:
output.allocation, output.error = farena_grow(arena, input.old_allocation, input.requested_size, input.alignment, input.op == .Grow)
return
case .Shrink:
output.allocation, output.error = farena_shirnk(arena, input.old_allocation, input.requested_size, input.alignment)
return
case .Rewind:
farena_rewind(arena, input.save_point)
return
case .SavePoint:
output.save_point = farena_save(arena^)
return
case .Query:
output.features = {.Alloc, .Reset, .Grow, .Shrink, .Rewind, .Actually_Resize, .Is_Owner, .Hint_Fast_Bump}
output.max_alloc = len(arena.mem) - arena.used
output.min_alloc = 0
output.left = output.max_alloc
output.save_point = farena_save(arena^)
return
case .Is_Owner:
output.error = farena_is_owner(arena^, input.old_allocation) ? .Owner : .None
return
case .Startup, .Shutdown, .Thread_Start, .Thread_Stop:
output.error = .Mode_Not_Implemented
return
}
panic_contextless("Impossible path")
}
farena_odin_allocator_proc :: proc(
allocator_data : rawptr,
mode : Odin_AllocatorMode,
size : int,
alignment : int,
old_memory : rawptr,
old_size : int,
location : SourceCodeLocation = #caller_location
) -> ( data : []byte, error : Odin_AllocatorError)
{
error_: AllocatorError
assert_contextless(allocator_data != nil)
arena := transmute(^FArena) allocator_data
switch mode {
case .Alloc, .Alloc_Non_Zeroed:
data, error_ = farena_push(arena, byte, size, alignment, location)
if mode == .Alloc {
zero(data)
}
case .Free:
return {}, .Mode_Not_Implemented
case .Free_All:
farena_reset(arena)
case .Resize, .Resize_Non_Zeroed:
if (size > old_size) do data, error_ = farena_grow (arena, slice(cursor(old_memory), old_size), size, alignment, mode == .Resize)
else do data, error_ = farena_shirnk(arena, slice(cursor(old_memory), old_size), size, alignment)
case .Query_Features:
set := (^Odin_AllocatorModeSet)(old_memory)
if set != nil {
set^ = {.Alloc, .Alloc_Non_Zeroed, .Free_All, .Resize, .Resize_Non_Zeroed, .Query_Features, .Query_Info}
}
case .Query_Info:
info := (^Odin_AllocatorQueryInfo)(old_memory)
info.pointer = transmute(rawptr) farena_save(arena^).slot
info.size = len(arena.mem) - arena.used
info.alignment = DEFAULT_ALIGNMENT
return to_bytes(info), nil
}
error = transmute(Odin_AllocatorError) error_
return
}
when ODIN_DEBUG {
farena_ainfo :: #force_inline proc "contextless" (arena: ^FArena) -> AllocatorInfo { return AllocatorInfo{proc_id = .FArena, data = arena} }
farena_allocator :: #force_inline proc "contextless" (arena: ^FArena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{proc_id = .FArena, data = arena} }
}
else {
farena_ainfo :: #force_inline proc "contextless" (arena: ^FArena) -> AllocatorInfo { return AllocatorInfo{procedure = farena_allocator_proc, data = arena} }
farena_allocator :: #force_inline proc "contextless" (arena: ^FArena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{procedure = farena_allocator_proc, data = arena} }
}
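A short sketch of driving the fixed arena directly (hypothetical usage; sizes are arbitrary):

```odin
// Minimal sketch: bump-allocate, save, rewind, reset.
example_farena :: proc "contextless" () {
	backing: [4 * 1024]byte
	arena := farena_make(backing[:])
	_, err := farena_push(& arena, int, 16)  // bump-allocate 16 ints
	assert_contextless(err == .None)
	sp := farena_save(arena)                 // mark current watermark
	_, _ = farena_push(& arena, byte, 64)
	farena_rewind(& arena, sp)               // releases everything after sp
	farena_reset(& arena)                    // releases everything
}
```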

View File

@@ -0,0 +1,126 @@
package grime
FRingBuffer :: struct( $Type: typeid, $Size: u32 ) {
head : u32,
tail : u32,
num : u32,
items : [Size] Type,
}
ringbuf_fixed_clear :: #force_inline proc "contextless" (ring: ^FRingBuffer($Type, $Size)) { ring.head = 0; ring.tail = 0; ring.num = 0 }
ringbuf_fixed_is_full :: #force_inline proc "contextless" (ring: FRingBuffer($Type, $Size)) -> bool { return ring.num == Size }
ringbuf_fixed_is_empty :: #force_inline proc "contextless" (ring: FRingBuffer($Type, $Size)) -> bool { return ring.num == 0 }
ringbuf_fixed_peek_front_ref :: #force_inline proc "contextless" (using buffer: ^FRingBuffer($Type, $Size)) -> ^Type {
assert_contextless(num > 0, "Attempted to peek an empty ring buffer")
return & items[ head ]
}
ringbuf_fixed_peek_front :: #force_inline proc "contextless" ( using buffer : FRingBuffer( $Type, $Size)) -> Type {
assert_contextless(num > 0, "Attempted to peek an empty ring buffer")
return items[ head ]
}
ringbuf_fixed_peek_back :: #force_inline proc "contextless" (using buffer : FRingBuffer( $Type, $Size)) -> Type {
assert_contextless(num > 0, "Attempted to peek an empty ring buffer")
buf_size := u32(Size)
index := (tail - 1 + buf_size) % buf_size
return items[ index ]
}
ringbuf_fixed_push :: #force_inline proc(using buffer: ^FRingBuffer($Type, $Size), value: Type) {
if num == Size do head = (head + 1) % Size
else do num += 1
items[ tail ] = value
tail = (tail + 1) % Size
}
ringbuf_fixed_push_slice :: proc "contextless" (buffer: ^FRingBuffer($Type, $Size), slice: []Type) -> u32
{
size := u32(Size)
slice_size := u32(len(slice))
assert_contextless( slice_size <= size, "Attempting to append a slice that is larger than the ring buffer!" )
if slice_size == 0 do return 0
items_to_add := min( slice_size, size)
items_added : u32 = 0
if items_to_add > Size - buffer.num {
// Some or all existing items will be overwritten
overwrite_count := items_to_add - (Size - buffer.num)
buffer.head = (buffer.head + overwrite_count) % size
buffer.num = size
}
else {
buffer.num += items_to_add
}
if items_to_add <= size {
// Case 1: Slice fits entirely or partially in the buffer
space_to_end := size - buffer.tail
first_chunk := min(items_to_add, space_to_end)
// First copy: from tail to end of buffer
copy( buffer.items[ buffer.tail: ] , slice[ :first_chunk ] )
if first_chunk < items_to_add {
// Second copy: wrap around to start of buffer
second_chunk := items_to_add - first_chunk
copy( buffer.items[:], slice[ first_chunk: ][ :second_chunk ] )
}
buffer.tail = (buffer.tail + items_to_add) % Size
items_added = items_to_add
}
else
{
// Case 2: Slice is larger than buffer, only keep last Size elements
to_add := slice[ slice_size - size: ]
// First copy: from start of buffer to end
first_chunk := min(Size, u32(len(to_add)))
copy( buffer.items[:], to_add[ :first_chunk ] )
if first_chunk < Size {
// Second copy: wrap around
copy( buffer.items[ first_chunk: ], to_add[ first_chunk: ] )
}
buffer.head = 0
buffer.tail = 0
buffer.num = Size
items_added = Size
}
return items_added
}
ringbuf_fixed_pop :: #force_inline proc "contextless" (using buffer: ^FRingBuffer($Type, $Size)) -> Type {
assert_contextless(num > 0, "Attempted to pop an empty ring buffer")
value := items[ head ]
head = ( head + 1 ) % Size
num -= 1
return value
}
FRingBufferIterator :: struct($Type : typeid) {
items : []Type,
head : u32,
tail : u32,
index : u32,
remaining : u32,
}
iterator_ringbuf_fixed :: proc "contextless" (buffer: ^FRingBuffer($Type, $Size)) -> FRingBufferIterator(Type)
{
iter := FRingBufferIterator(Type){
items = buffer.items[:],
head = buffer.head,
tail = buffer.tail,
remaining = buffer.num,
}
buff_size := u32(Size)
if buffer.num > 0 {
// Start from the last pushed item (one before tail)
iter.index = (buffer.tail - 1 + buff_size) % buff_size
} else {
iter.index = buffer.tail // This will not be used as remaining is 0
}
return iter
}
next_ringbuf_fixed_iterator :: proc(iter: ^FRingBufferIterator($Type)) -> ^Type {
using iter; if remaining == 0 do return nil // If there are no items left to iterate over
buf_size := cast(u32) len(items)
result := &items[index]
// Decrement index and wrap around if necessary
index = (index - 1 + buf_size) % buf_size
remaining -= 1
return result
}
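A hypothetical usage sketch of the ring buffer above, pushing past capacity and iterating newest-first:

```odin
// Sketch: a 4-entry ring of frame times.
example_ringbuf :: proc() {
	ring: FRingBuffer(f32, 4)
	for v in ([]f32{1, 2, 3, 4, 5}) do ringbuf_fixed_push(& ring, v)
	// Capacity is 4, so the oldest value (1) has been overwritten.
	_ = ringbuf_fixed_peek_front(ring)  // oldest retained value
	iter := iterator_ringbuf_fixed(& ring)
	for item := next_ringbuf_fixed_iterator(& iter); item != nil; item = next_ringbuf_fixed_iterator(& iter) {
		// visits items newest-first: 5, 4, 3, 2
	}
}
```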

View File

@@ -0,0 +1,29 @@
package grime
FStack :: struct ($Type: typeid, $Size: u32) {
items: [Size]Type,
idx: u32,
}
stack_clear :: #force_inline proc "contextless" (stack: ^FStack($Type, $Size)) { stack.idx = 0 }
stack_push :: #force_inline proc "contextless" (stack: ^FStack($Type, $Size ), value: Type) {
assert_contextless(stack.idx < u32(len( stack.items )), "Attempted to push on a full stack")
stack.items[stack.idx] = value
stack.idx += 1
}
stack_pop :: #force_inline proc "contextless" (stack: ^FStack($Type, $Size)) {
assert_contextless(stack.idx > 0, "Attempted to pop an empty stack")
stack.idx -= 1
if stack.idx == 0 {
stack.items[stack.idx] = {}
}
}
stack_peek_ref :: #force_inline proc "contextless" (s: ^FStack($Type, $Size)) -> (^Type) {
assert_contextless(s.idx > 0, "Attempted to peek an empty stack")
return & s.items[s.idx - 1]
}
stack_peek :: #force_inline proc "contextless" (s: ^FStack($Type, $Size)) -> Type {
assert_contextless(s.idx > 0, "Attempted to peek an empty stack")
return s.items[s.idx - 1]
}
stack_push_contextless :: #force_inline proc "contextless" (s: ^FStack($Type, $Size), value: Type) {
s.items[s.idx] = value
s.idx += 1
}

View File

@@ -1,9 +1,20 @@
package grime
hash32_djb8 :: #force_inline proc "contextless" ( hash : ^u32, bytes : []byte ) {
hash32_djb8 :: #force_inline proc "contextless" (hash: ^u32, bytes: []byte ) {
for value in bytes do (hash^) = (( (hash^) << 8) + (hash^) ) + u32(value)
}
hash64_djb8 :: #force_inline proc "contextless" ( hash : ^u64, bytes : []byte ) {
hash64_djb8 :: #force_inline proc "contextless" (hash: ^u64, bytes: []byte ) {
for value in bytes do (hash^) = (( (hash^) << 8) + (hash^) ) + u64(value)
}
// Ripped from core:hash, fnv32a
@(optimization_mode="favor_size")
hash32_fnv1a :: #force_inline proc "contextless" (hash: ^u32, data: []byte, seed := u32(0x811c9dc5)) {
hash^ = seed; for b in data { hash^ = (hash^ ~ u32(b)) * 0x01000193 }
}
// Ripped from core:hash, fnv64a
@(optimization_mode="favor_size")
hash64_fnv1a :: #force_inline proc "contextless" (hash: ^u64, data: []byte, seed := u64(0xcbf29ce484222325)) {
hash^ = seed; for b in data { hash^ = (hash^ ~ u64(b)) * 0x100000001b3 }
}
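A sketch of how a caller might hash a string key with the fnv1a helper above before a table lookup (hypothetical caller; the `transmute` relies on Odin strings and byte slices sharing layout):

```odin
// Sketch: derive a 64-bit key from a name for a key table.
example_hash_key :: proc "contextless" (name: string) -> u64 {
	h: u64
	hash64_fnv1a(& h, transmute([]byte) name)
	return h
}
```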

View File

@@ -1,164 +0,0 @@
package grime
import "base:intrinsics"
/*
Key Table 1-Layer Chained-Chunked-Cells
*/
KT1CX_Slot :: struct($type: typeid) {
value: type,
key: u64,
occupied: b32,
}
KT1CX_Cell :: struct($type: typeid, $depth: int) {
slots: [depth]KT1CX_Slot(type),
next: ^KT1CX_Cell(type, depth),
}
KT1CX :: struct($cell: typeid) {
table: []cell,
}
KT1CX_Byte_Slot :: struct {
key: u64,
occupied: b32,
}
KT1CX_Byte_Cell :: struct {
next: ^byte,
}
KT1CX_Byte :: struct {
table: []byte,
}
KT1CX_ByteMeta :: struct {
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
KT1CX_InfoMeta :: struct {
table_size: int,
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
KT1CX_Info :: struct {
backing_table: AllocatorInfo,
}
kt1cx_init :: proc(info: KT1CX_Info, m: KT1CX_InfoMeta, result: ^KT1CX_Byte) {
assert(result != nil)
assert(info.backing_table.procedure != nil)
assert(m.cell_depth > 0)
assert(m.table_size >= 4 * Kilo)
assert(m.type_width > 0)
table_raw := transmute(SliceByte) mem_alloc(m.table_size * m.cell_size, ainfo = odin_allocator(info.backing_table))
slice_assert(transmute([]byte) table_raw)
table_raw.len = m.table_size
result.table = transmute([]byte) table_raw
}
kt1cx_clear :: proc(kt: KT1CX_Byte, m: KT1CX_ByteMeta) {
cell_cursor := cursor(kt.table)
table_len := len(kt.table) * m.cell_size
for ; cell_cursor != end(kt.table); cell_cursor = cell_cursor[m.cell_size:] // for cell, cell_id in kt.table.cells
{
slots := SliceByte { cell_cursor, m.cell_depth * m.slot_size } // slots = cell.slots
slot_cursor := slots.data
for;; {
slot := slice(slot_cursor, m.slot_size) // slot = slots[slot_id]
zero(slot) // slot = {}
if slot_cursor == end(slots) { // if slot == end(slot)
next := slot_cursor[m.cell_next_offset:] // next = kt.table.cells[cell_id + 1]
if next != nil { // if next != nil
slots.data = next // slots = next.slots
slot_cursor = next
continue
}
}
slot_cursor = slot_cursor[m.slot_size:] // slot = slots[slot_id + 1]
}
}
}
kt1cx_slot_id :: proc(kt: KT1CX_Byte, key: u64, m: KT1CX_ByteMeta) -> u64 {
cell_size := m.cell_size // dummy value
hash_index := key % u64(len(kt.table))
return hash_index
}
kt1cx_get :: proc(kt: KT1CX_Byte, key: u64, m: KT1CX_ByteMeta) -> ^byte {
hash_index := kt1cx_slot_id(kt, key, m)
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // cell_id = 0
{
slots := slice(cell_cursor, m.cell_depth * m.slot_size) // slots = cell[cell_id].slots
slot_cursor := cell_cursor // slot_id = 0
for;;
{
slot := transmute(^KT1CX_Byte_Slot) slot_cursor[m.slot_key_offset:] // slot = cell[slot_id]
if slot.occupied && slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots)
{
cell_next := cell_cursor[m.cell_next_offset:] // cell.next
if cell_next != nil {
slots = slice(cell_next, len(slots)) // slots = cell.next
slot_cursor = cell_next
cell_cursor = cell_next // cell = cell.next
continue
}
else {
return nil
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
}
}
kt1cx_set :: proc(kt: KT1CX_Byte, key: u64, value: []byte, backing_cells: AllocatorInfo, m: KT1CX_ByteMeta) -> ^byte {
hash_index := kt1cx_slot_id(kt, key, m)
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // KT1CX_Cell(Type) cell = kt.table[hash_index]
{
slots := SliceByte {cell_cursor, m.cell_depth * m.slot_size} // cell.slots
slot_cursor := slots.data
for ;;
{
slot := transmute(^KT1CX_Byte_Slot) slot_cursor[m.slot_key_offset:]
if slot.occupied == false {
slot.occupied = true
slot.key = key
return cast(^byte) slot_cursor
}
else if slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots) {
curr_cell := transmute(^KT1CX_Byte_Cell) (uintptr(cell_cursor) + m.cell_next_offset) // curr_cell = cell
if curr_cell != nil {
slots.data = curr_cell.next
slot_cursor = curr_cell.next
cell_cursor = curr_cell.next
continue
}
else {
new_cell := mem_alloc(m.cell_size, ainfo = odin_allocator(backing_cells))
curr_cell.next = raw_data(new_cell)
slot = transmute(^KT1CX_Byte_Slot) cursor(new_cell)[m.slot_key_offset:]
slot.occupied = true
slot.key = key
return raw_data(new_cell)
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
return nil
}
}
kt1cx_assert :: proc(kt: $type / KT1CX) {
slice_assert(kt.table)
}
kt1cx_byte :: proc(kt: $type / KT1CX) -> KT1CX_Byte { return { slice( transmute([^]byte) cursor(kt.table), len(kt.table)) } }

View File

@@ -1,48 +0,0 @@
package grime
/*
Key Table 1-Layer Linear (KT1L)
*/
KT1L_Slot :: struct($Type: typeid) {
key: u64,
value: Type,
}
KT1L_Meta :: struct {
slot_size: uintptr,
kt_value_offset: uintptr,
type_width: uintptr,
type: typeid,
}
kt1l_populate_slice_a2_Slice_Byte :: proc(kt: ^[]byte, backing: AllocatorInfo, values: []byte, num_values: int, m: KT1L_Meta) {
assert(kt != nil)
if num_values == 0 { return }
table_size_bytes := num_values * int(m.slot_size)
kt^ = mem_alloc(table_size_bytes, ainfo = transmute(Odin_Allocator) backing)
slice_assert(kt ^)
kt_raw : SliceByte = transmute(SliceByte) kt^
for id in 0 ..< cast(uintptr) num_values {
slot_offset := id * m.slot_size // slot id
slot_cursor := kt_raw.data[slot_offset:] // slots[id] type: KT1L_<Type>
// slot_key := transmute(^u64) slot_cursor // slots[id].key type: U64
// slot_value := slice(slot_cursor[m.kt_value_offset:], m.type_width) // slots[id].value type: <Type>
a2_offset := id * m.type_width * 2 // a2 entry id
a2_cursor := cursor(values)[a2_offset:] // a2_entries[id] type: A2_<Type>
// a2_key := (transmute(^[]byte) a2_cursor) ^ // a2_entries[id].key type: <Type>
// a2_value := slice(a2_cursor[m.type_width:], m.type_width) // a2_entries[id].value type: <Type>
mem_copy_non_overlapping(slot_cursor[m.kt_value_offset:], a2_cursor[m.type_width:], cast(int) m.type_width) // slots[id].value = a2_entries[id].value
(transmute([^]u64) slot_cursor)[0] = 0;
hash64_djb8(transmute(^u64) slot_cursor, (transmute(^[]byte) a2_cursor) ^) // slots[id].key = hash64_djb8(a2_entries[id].key)
}
kt_raw.len = num_values
}
kt1l_populate_slice_a2 :: proc($Type: typeid, kt: ^[]KT1L_Slot(Type), backing: AllocatorInfo, values: [][2]Type) {
assert(kt != nil)
values_bytes := slice(transmute([^]u8) raw_data(values), len(values) * size_of([2]Type))
kt1l_populate_slice_a2_Slice_Byte(transmute(^[]byte) kt, backing, values_bytes, len(values), {
slot_size = size_of(KT1L_Slot(Type)),
kt_value_offset = offset_of(KT1L_Slot(Type), value),
type_width = size_of(Type),
type = Type,
})
}

View File

@@ -0,0 +1,196 @@
package grime
import "base:intrinsics"
/*
Key Table Chained-Chunked-Cells
Table has cells with a user-specified depth. Each cell is searched linearly once the first slot is occupied.
Table-allocated cells are looked up by hash.
If a cell is exhausted, additional cells are allocated singly-chained, reported to the user via a "cell_overflow" counter.
Slots track occupancy with a tombstone (occupied signal).
If the table ever needs to change its size, it should be a wipe and full traversal of the arena holding the values,
or maybe a wipe of that arena as it may no longer be accessible.
Has a likelihood of cache misses (based on reading other impls of these kinds of tables).
Odin's hash-map and Jai's are designed with open-addressing and avoid that.
Intended to be wrapped in a parent interface (such as a string cache). Keys are hashed by the table's user.
The table is not intended to directly store the type's value in its slots (expects the slot value to be some sort of reference).
The value should be stored in an arena.
Could be upgraded to an X-layer variant; not sure if that's ever viable.
Would essentially be segmenting the hash to address a multi-layered table lookup,
where one table leads to another hash-resolved id for a subtable, with linear search of cells after.
*/
KTCX_Slot :: struct($type: typeid) {
value: type,
key: u64,
occupied: b32,
}
KTCX_Cell :: struct($type: typeid, $depth: int) {
slots: [depth]KTCX_Slot(type),
next: ^KTCX_Cell(type, depth),
}
KTCX :: struct($cell: typeid) {
table: []cell,
cell_overflow: int,
}
KTCX_Byte_Slot :: struct {
key: u64,
occupied: b32,
}
KTCX_Byte_Cell :: struct {
next: ^byte,
}
KTCX_Byte :: struct {
table: []byte,
cell_overflow: int,
}
KTCX_ByteMeta :: struct {
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
KTCX_Info :: struct {
table_size: int,
slot_size: int,
slot_key_offset: uintptr,
cell_next_offset: uintptr,
cell_depth: int,
cell_size: int,
type_width: int,
type: typeid,
}
ktcx_byte :: #force_inline proc "contextless" (kt: $type / KTCX) -> KTCX_Byte { return { slice( transmute([^]byte) cursor(kt.table), len(kt.table)) } }
ktcx_init_byte :: proc(result: ^KTCX_Byte, tbl_backing: Odin_Allocator, m: KTCX_Info) {
assert(result != nil)
assert(tbl_backing.procedure != nil)
assert(m.cell_depth > 0)
assert(m.table_size >= 4 * Kilo)
assert(m.type_width > 0)
table_raw, error := mem_alloc(m.table_size * m.cell_size, ainfo = tbl_backing)
assert(error == .None); slice_assert(transmute([]byte) table_raw)
(transmute(^SliceByte) & table_raw).len = m.table_size
result.table = table_raw
}
ktcx_clear :: proc(kt: KTCX_Byte, m: KTCX_ByteMeta) {
cell_cursor := cursor(kt.table)
table_len := len(kt.table) * m.cell_size
for ; cell_cursor != end(kt.table); cell_cursor = cell_cursor[m.cell_size:] // for cell, cell_id in kt.table.cells
{
slots := SliceByte { cell_cursor, m.cell_depth * m.slot_size } // slots = cell.slots
slot_cursor := slots.data
for;; {
slot := slice(slot_cursor, m.slot_size) // slot = slots[slot_id]
zero(slot) // slot = {}
if slot_cursor == end(slots) { // if slot == end(slot)
next := slot_cursor[m.cell_next_offset:] // next = kt.table.cells[cell_id + 1]
if next != nil { // if next != nil
slots.data = next // slots = next.slots
slot_cursor = next
continue
}
}
slot_cursor = slot_cursor[m.slot_size:] // slot = slots[slot_id + 1]
}
}
}
ktcx_slot_id :: #force_inline proc "contextless" (table: []byte, key: u64) -> u64 {
return key % u64(len(table))
}
ktcx_get :: proc(kt: KTCX_Byte, key: u64, m: KTCX_ByteMeta) -> ^byte {
hash_index := key % u64(len(kt.table)) // ktcx_slot_id
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // cell_id = 0
{
slots := slice(cell_cursor, m.cell_depth * m.slot_size) // slots = cell[cell_id].slots
slot_cursor := cell_cursor // slot_id = 0
for;;
{
slot := transmute(^KTCX_Byte_Slot) slot_cursor[m.slot_key_offset:] // slot = cell[slot_id]
if slot.occupied && slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots)
{
cell_next := cell_cursor[m.cell_next_offset:] // cell.next
if cell_next != nil {
slots = slice(cell_next, len(slots)) // slots = cell.next
slot_cursor = cell_next
cell_cursor = cell_next // cell = cell.next
continue
}
else {
return nil
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
}
}
ktcx_set :: proc(kt: ^KTCX_Byte, key: u64, value: []byte, backing_cells: Odin_Allocator, m: KTCX_ByteMeta) -> ^byte {
hash_index := key % u64(len(kt.table)) // ktcx_slot_id
cell_offset := uintptr(hash_index) * uintptr(m.cell_size)
cell_cursor := cursor(kt.table)[cell_offset:] // KTCX_Cell(Type) cell = kt.table[hash_index]
{
slots := SliceByte {cell_cursor, m.cell_depth * m.slot_size} // cell.slots
slot_cursor := slots.data
for ;;
{
slot := transmute(^KTCX_Byte_Slot) slot_cursor[m.slot_key_offset:]
if slot.occupied == false {
slot.occupied = true
slot.key = key
return cast(^byte) slot_cursor
}
else if slot.key == key {
return cast(^byte) slot_cursor
}
if slot_cursor == end(slots) {
curr_cell := transmute(^KTCX_Byte_Cell) (uintptr(cell_cursor) + m.cell_next_offset) // curr_cell = cell
if curr_cell.next != nil {
slots.data = curr_cell.next
slot_cursor = curr_cell.next
cell_cursor = curr_cell.next
continue
}
else {
ensure(false, "Exhausted a cell. Increase the table size?")
new_cell, _ := mem_alloc(m.cell_size, ainfo = backing_cells)
curr_cell.next = raw_data(new_cell)
slot = transmute(^KTCX_Byte_Slot) cursor(new_cell)[m.slot_key_offset:]
slot.occupied = true
slot.key = key
kt.cell_overflow += 1
return raw_data(new_cell)
}
}
slot_cursor = slot_cursor[m.slot_size:]
}
return nil
}
}
// Type aware wrappers
ktcx_init :: #force_inline proc(table_size: int, tbl_backing: Odin_Allocator,
kt: ^$kt_type / KTCX(KTCX_Cell(KTCX_Slot($Type), $Depth))
){
ktcx_init_byte(transmute(^KTCX_Byte) kt, tbl_backing, {
table_size = table_size,
slot_size = size_of(KTCX_Slot(Type)),
slot_key_offset = offset_of(KTCX_Slot(Type), key),
cell_next_offset = offset_of(KTCX_Cell(Type, Depth), next),
cell_depth = Depth,
cell_size = size_of(KTCX_Cell(Type, Depth)),
type_width = size_of(Type),
type = Type,
})
}
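
A minimal usage sketch of the typed wrapper above (hypothetical names; the backing allocator and the typed get/set wrappers are assumed to exist elsewhere in the package):

```odin
// Hypothetical: a u64 -> int table with cells 4 slots deep.
Example_KT :: KTCX(KTCX_Cell(KTCX_Slot(int), 4))

example_ktcx_init :: proc(backing: Odin_Allocator) {
	kt: Example_KT
	ktcx_init(cast(int) closest_prime(1024), backing, &kt)
	// Lookups/insertions then go through typed wrappers over
	// ktcx_get/ktcx_set, passing the same KTCX_ByteMeta layout
	// that ktcx_init derives above.
}
```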


@@ -0,0 +1,37 @@
package grime
/*
Key Table 1-Layer Linear (KT1L)
Mainly intended for linear lookup of key-paired values, e.g. arg-value parsing with label ids.
The table is built in one go from the key-value pairs. The default populate (slice_a2) has the key and value as the same type.
*/
KTL_Slot :: struct($Type: typeid) {
key: u64,
value: Type,
}
KTL_Meta :: struct {
slot_size: int,
kt_value_offset: int,
type_width: int,
type: typeid,
}
ktl_get :: #force_inline proc "contextless" (kt: []KTL_Slot($Type), key: u64) -> ^Type {
for & slot in kt { if key == slot.key do return & slot.value; }
return nil
}
// Unique populator for key-value pair strings
ktl_populate_slice_a2_str :: #force_inline proc(kt: ^[]KTL_Slot(string), backing: Odin_Allocator, values: [][2]string) {
assert(kt != nil)
if len(values) == 0 { return }
raw_bytes, error := mem_alloc(size_of(KTL_Slot(string)) * len(values), ainfo = backing); assert(error == .None);
kt^ = slice( transmute([^]KTL_Slot(string)) cursor(raw_bytes), len(raw_bytes) / size_of(KTL_Slot(string)) )
for id in 0 ..< len(values) {
mem_copy(& kt[id].value, & values[id][1], size_of(string))
hash64_fnv1a(& kt[id].key, transmute([]byte) values[id][0])
}
}
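
A hypothetical caller, sketching the intended arg-parsing use (the `hash64_fnv1a` out-parameter signature is inferred from the populate proc above):

```odin
// Hypothetical usage: arg label -> value lookup.
example_ktl :: proc(backing: Odin_Allocator) {
	pairs := [][2]string{ {"--width", "1280"}, {"--height", "720"} }
	kt: []KTL_Slot(string)
	ktl_populate_slice_a2_str(&kt, backing, pairs)
	label := "--width"
	key: u64; hash64_fnv1a(&key, transmute([]byte) label)
	if value := ktl_get(kt, key); value != nil {
		// value^ should be "1280"
	}
}
```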


@@ -0,0 +1,142 @@
package grime
/*
Hash Table based on Jon's Jai table & Sean Barrett's.
I don't like the table definition containing
the allocator, hash, or compare procedure to be used.
So they have been stripped and are instead applied at the procedure call site;
the parent container is responsible for tracking them.
TODO(Ed): Resolve appropriate Key-Table term for it.
TODO(Ed): Complete this later if we actually want something beyond KT1CX or Odin's map.
*/
KT_Slot :: struct(
$TypeHash: typeid,
$TypeKey: typeid,
$TypeValue: typeid
) {
hash: TypeHash,
key: TypeKey,
value: TypeValue,
}
KT :: struct($KT_Slot: typeid) {
load_factor_percent: int,
count: int,
allocated: int,
slots_filled: int,
slots: []KT_Slot,
}
KT_Info :: struct {
key_width: int,
value_width: int,
slot_width: int,
}
KT_Opaque :: struct {
count: int,
allocated: int,
slots_filled: int,
slots: []byte,
}
KT_ByteMeta :: struct {
hash_width: int,
value_width: int,
}
KT_COUNT_COLLISIONS :: #config(KT_COUNT_COLLISIONS, false)
KT_HASH_NEVER_OCCUPIED :: 0
KT_HASH_REMOVED :: 1
KT_HASH_FIRST_VALID :: 2
KT_LOAD_FACTOR_PERCENT :: 70
kt_byte_init :: proc(info: KT_Info, tbl_allocator: Odin_Allocator, kt: ^KT_Opaque, $HashType: typeid)
{
#assert(size_of(HashType) >= 32)
assert(tbl_allocator.procedure != nil)
assert(info.value_width >= 32)
assert(info.slot_width >= 64)
}
kt_deinit :: proc(table: ^$KT / typeid, allocator: Odin_Allocator)
{
}
kt_walk_table_body_proc :: #type proc($TypeHash: typeid, hash: TypeHash, kt: ^KT_Opaque, info: KT_Info, id: TypeHash) -> (should_break: bool)
kt_walk_table :: proc($TypeHash: typeid, hash: TypeHash, kt: ^KT_Opaque, info: KT_Info, $walk_body: kt_walk_table_body_proc) -> (index: TypeHash)
{
mask := cast(TypeHash)(kt.allocated - 1) // Cast may truncate
if hash < KT_HASH_FIRST_VALID do hash += KT_HASH_FIRST_VALID
index : TypeHash = hash & mask
probe_increment: TypeHash = 1
for id := transmute(TypeHash) kt.slots[info.slot_width * index:]; id != 0;
{
if #force_inline walk_body(hash, kt, info, id) do break
index = (index + probe_increment) & mask
probe_increment += 1
}
return index
}
// Will not expand table if capacity reached, user must do that check beforehand.
// Will return existing if hash found
kt_byte_add :: proc(value: [^]byte, key: [^]byte, hash: $TypeHash, kt: ^KT_Opaque, info: KT_Info)-> [^]byte
{
assert(kt.slots_filled < kt.allocated)
index := #force_inline kt_walk_table(hash, kt, info,
proc(hash: $TypeHash, kt: ^KT_Opaque, info: KT_Info, id: TypeHash) -> (should_break: bool)
{
if id == KT_HASH_REMOVED {
kt.slots_filled -= 1
should_break = true
return
}
//TODO(Ed): Add collision tracking
return
})
kt.count += 1
kt.slots_filled += 1
slot_offset := info.slot_width * index
entry := kt.slots[slot_offset:]
mem_copy_non_overlapping(entry, hash, size_of(TypeHash))
mem_copy_non_overlapping(entry[size_of(hash):], key, info.key_width)
mem_copy_non_overlapping(entry[size_of(hash) + info.key_width:], value, info.value_width)
return entry
}
// Will not expand table if capacity reached, user must do that check beforehand.
// Will override if hash exists
kt_byte_set :: proc()
{
}
kt_remove :: proc()
{
}
kt_byte_contains :: proc()
{
}
kt_byte_find_pointer :: proc()
{
}
kt_find :: proc()
{
}
kt_find_multiple :: proc()
{
}
kt_next_power_of_two :: #force_inline proc(x: int) -> int { power := 1; for x > power do power += power; return power }
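
The walk above can be pictured as the usual power-of-two masked probe with a growing increment; a self-contained sketch of just the index sequence (hypothetical proc, not part of the table):

```odin
example_probe_sequence :: proc(hash, allocated: u64) -> (visited: int) {
	assert(allocated & (allocated - 1) == 0) // requires a power-of-two size
	mask  := allocated - 1
	index := hash & mask
	probe_increment: u64 = 1
	for _ in 0 ..< allocated {
		// a real walk would inspect slots[index] here and stop on a hit/empty
		index = (index + probe_increment) & mask
		probe_increment += 1
		visited += 1
	}
	return
}
```

With `allocated` a power of two, this increment pattern visits every slot before repeating.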


@@ -17,4 +17,3 @@ sll_queue_push_nz :: proc "contextless" (first: ^$ParentType, last, n: ^^$Type,
}
}
sll_queue_push_n :: #force_inline proc "contextless" (first: $ParentType, last, n: ^^$Type) { sll_queue_push_nz(first, last, n, nil) }


@@ -2,6 +2,9 @@ package grime
import core_log "core:log"
// TODO(Ed): This logger doesn't support multi-threading.
// TODO(Ed): Look into Lottes's wait-free logger.
Max_Logger_Message_Width :: 160
LoggerEntry :: struct {
@@ -42,26 +45,10 @@ logger_init :: proc( logger : ^ Logger, id : string, file_path : string, file :
LOGGER_VARENA_BASE_ADDRESS : uintptr = 2 * Tera
@static vmem_init_counter : uintptr = 0
// alloc_error : AllocatorError
// logger.varena, alloc_error = varena_init(
// LOGGER_VARENA_BASE_ADDRESS + vmem_init_counter * 250 * Megabyte,
// 1 * Megabyte,
// 128 * Kilobyte,
// growth_policy = nil,
// allow_any_resize = true,
// dbg_name = "logger varena",
// enable_mem_tracking = false )
// verify( alloc_error == .None, "Failed to allocate logger's virtual arena")
vmem_init_counter += 1
// TODO(Ed): Figure out another solution here...
// logger.entries, alloc_error = array_init(Array(LoggerEntry), 8192, runtime.heap_allocator())
// verify( alloc_error == .None, "Failed to allocate logger's entries array")
context.logger = { logger_interface, logger, LoggerLevel.Debug, Default_File_Logger_Opts }
log("Initialized Logger")
log_print("Initialized Logger")
when false {
log("This sentence is over 80 characters long on purpose to test the ability of this logger to properly wrap long logs with a new line and then at the end of that pad it with the appropriate signature.")
log_print("This sentence is over 80 characters long on purpose to test the ability of this logger to properly wrap long logs with a new line and then at the end of that pad it with the appropriate signature.")
}
}
@@ -137,24 +124,13 @@ logger_interface :: proc(
str_pfmt_file_ln( logger.file, to_string(builder) )
}
// This buffer is used below exclusively to prevent any allocator recursion when verbose logging from allocators.
// This means a single line is limited to 32k buffer (increase naturally if this SOMEHOW becomes a bottleneck...)
Logger_Allocator_Buffer : [32 * Kilo]u8
// Below are made on demand per-package.
// They should strict only use a scratch allocator...
log :: proc( msg : string, level := LoggerLevel.Info, loc := #caller_location ) {
// TODO(Ed): Finish this
// temp_arena : Arena; arena_init(& temp_arena, Logger_Allocator_Buffer[:])
// context.allocator = arena_allocator(& temp_arena)
// context.temp_allocator = arena_allocator(& temp_arena)
// core_log.log( level, msg, location = loc )
log_print :: proc( msg : string, level := LoggerLevel.Info, loc := #caller_location ) {
core_log.log( level, msg, location = loc )
}
log_fmt :: proc( fmt : string, args : ..any, level := LoggerLevel.Info, loc := #caller_location ) {
// TODO(Ed): Finish this
// temp_arena : Arena; arena_init(& temp_arena, Logger_Allocator_Buffer[:])
// context.allocator = arena_allocator(& temp_arena)
// context.temp_allocator = arena_allocator(& temp_arena)
// core_log.logf( level, fmt, ..args, location = loc )
log_print_fmt :: proc( fmt : string, args : ..any, level := LoggerLevel.Info, loc := #caller_location ) {
core_log.logf( level, fmt, ..args, location = loc )
}


@@ -5,11 +5,41 @@ Mega :: Kilo * 1024
Giga :: Mega * 1024
Tera :: Giga * 1024
// Provides the smallest prime in the table below that is >= the given capacity
closest_prime :: proc(capacity: uint) -> uint
{
prime_table : []uint = {
53, 97, 193, 389, 769, 1543, 3079, 6151, 12289, 24593,
49157, 98317, 196613, 393241, 786433, 1572869, 3145739,
6291469, 12582917, 25165843, 50331653, 100663319,
201326611, 402653189, 805306457, 1610612741, 3221225473, 6442450939,
};
for slot in prime_table {
if slot >= capacity {
return slot
}
}
return prime_table[len(prime_table) - 1]
}
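
For instance (hypothetical caller):

```odin
example_closest_prime :: proc() {
	size := closest_prime(1000) // first table entry >= 1000 is 1543
	assert(size == 1543)
}
```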
raw_cursor :: #force_inline proc "contextless" (ptr: rawptr) -> [^]byte { return transmute([^]byte) ptr }
ptr_cursor :: #force_inline proc "contextless" (ptr: ^$Type) -> [^]Type { return transmute([^]Type) ptr }
memory_zero_explicit :: #force_inline proc "contextless" (data: rawptr, len: int) -> rawptr {
mem_zero_volatile(data, len) // Use the volatile mem_zero
atomic_thread_fence(.Seq_Cst) // Prevent reordering
@(require_results) is_power_of_two :: #force_inline proc "contextless" (x: uintptr) -> bool { return (x > 0) && ((x & (x-1)) == 0) }
@(require_results)
align_pow2_uint :: #force_inline proc "contextless" (ptr, align: uint) -> uint {
assert_contextless(is_power_of_two(uintptr(align)))
return ptr & ~(align-1)
}
@(require_results)
align_pow2 :: #force_inline proc "contextless" (ptr, align: int) -> int {
assert_contextless(is_power_of_two(uintptr(align)))
return ptr & ~(align-1)
}
sync_mem_zero :: #force_inline proc "contextless" (data: rawptr, len: int) -> rawptr {
mem_zero_volatile(data, len) // Use the volatile mem_zero
sync_fence(.Seq_Cst) // Prevent reordering
return data
}
@@ -23,23 +53,30 @@ SliceRaw :: struct ($Type: typeid) {
}
slice :: #force_inline proc "contextless" (s: [^] $Type, num: $Some_Integer) -> [ ]Type { return transmute([]Type) SliceRaw(Type) { s, cast(int) num } }
slice_cursor :: #force_inline proc "contextless" (s: []$Type) -> [^]Type { return transmute([^]Type) raw_data(s) }
slice_assert :: #force_inline proc (s: $SliceType / []$Type) {
assert(len(s) > 0)
assert(s != nil)
slice_assert :: #force_inline proc "contextless" (s: $SliceType / []$Type) {
assert_contextless(len(s) > 0)
assert_contextless(s != nil)
}
slice_end :: #force_inline proc "contextless" (s : $SliceType / []$Type) -> ^Type { return cursor(s)[len(s):] }
slice_byte_end :: #force_inline proc "contextless" (s : SliceByte) -> ^byte { return s.data[s.len:] }
slice_zero :: #force_inline proc "contextless" (s: $SliceType / []$Type) {
assert_contextless(len(s) > 0)
mem_zero(raw_data(s), size_of(Type) * len(s))
}
slice_copy :: #force_inline proc "contextless" (dst, src: $SliceType / []$Type) -> int {
n := max(0, min(len(dst), len(src)))
if n > 0 {
mem_copy(raw_data(dst), raw_data(src), n * size_of(Type))
}
assert_contextless(n > 0)
mem_copy(raw_data(dst), raw_data(src), n * size_of(Type))
return n
}
@(require_results) slice_to_bytes :: #force_inline proc "contextless" (s: []$Type) -> []byte { return ([^]byte)(raw_data(s))[:len(s) * size_of(Type)] }
@(require_results) slice_raw :: #force_inline proc "contextless" (s: []$Type) -> SliceRaw(Type) { return transmute(SliceRaw(Type)) s }
slice_fill :: #force_inline proc "contextless" (s: $SliceType / []$Type, value: Type) { memory_fill(cursor(s), value, len(s)) }
@(require_results) slice_to_bytes :: #force_inline proc "contextless" (s: []$Type) -> []byte { return ([^]byte)(raw_data(s))[:len(s) * size_of(Type)] }
@(require_results) slice_raw :: #force_inline proc "contextless" (s: []$Type) -> SliceRaw(Type) { return transmute(SliceRaw(Type)) s }
@(require_results) type_to_bytes :: #force_inline proc "contextless" (obj: ^$Type) -> []byte { return ([^]byte)(obj)[:size_of(Type)] }
//region Memory Math
@@ -72,44 +109,39 @@ calc_padding_with_header :: proc "contextless" (pointer: uintptr, alignment: uin
}
// Helper to get the beginning of memory after a slice
memory_after :: #force_inline proc "contextless" ( s: []byte ) -> ( ^ byte) {
@(require_results)
memory_after :: #force_inline proc "contextless" (s: []byte ) -> (^byte) {
return cursor(s)[len(s):]
}
memory_after_header :: #force_inline proc "contextless" ( header : ^($ Type) ) -> ( [^]byte) {
memory_after_header :: #force_inline proc "contextless" (header: ^($Type)) -> ([^]byte) {
result := cast( [^]byte) ptr_offset( header, 1 )
// result := cast( [^]byte) (cast( [^]Type) header)[ 1:]
return result
}
@(require_results)
memory_align_formula :: #force_inline proc "contextless" ( size, align : uint) -> uint {
memory_align_formula :: #force_inline proc "contextless" (size, align: uint) -> uint {
result := size + align - 1
return result - result % align
}
// This is here just for docs
memory_misalignment :: #force_inline proc ( address, alignment : uintptr) -> uint {
memory_misalignment :: #force_inline proc "contextless" (address, alignment: uintptr) -> uint {
// address % alignment
assert(is_power_of_two(alignment))
assert_contextless(is_power_of_two(alignment))
return uint( address & (alignment - 1) )
}
// This is here just for docs
@(require_results)
memory_align_forward :: #force_inline proc( address, alignment : uintptr) -> uintptr
memory_align_forward :: #force_inline proc "contextless" (address, alignment : uintptr) -> uintptr
{
assert(is_power_of_two(alignment))
assert_contextless(is_power_of_two(alignment))
aligned_address := address
misalignment := cast(uintptr) memory_misalignment( address, alignment )
misalignment := transmute(uintptr) memory_misalignment( address, alignment )
if misalignment != 0 {
aligned_address += alignment - misalignment
}
return aligned_address
}
// align_up :: proc(address: uintptr, alignment: uintptr) -> uintptr {
// return (address + alignment - 1) & ~(alignment - 1)
// }
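
Worth noting: `align_pow2` above masks *down* to the alignment boundary, while `memory_align_formula` rounds *up*; a quick sketch (hypothetical proc):

```odin
example_alignment :: proc() {
	assert(align_pow2(37, 16) == 32)           // 37 & ~15: masks down
	assert(memory_align_formula(37, 16) == 48) // (37 + 15) - (52 % 16): rounds up
}
```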


@@ -0,0 +1,114 @@
/*
This was a tracking allocator made to kill off various bugs left in grime's pool & slab allocators.
It doesn't perform that well on a per-frame basis and should be avoided for general memory debugging.
It only makes sure that memory allocations don't collide in the allocator and deallocations don't occur for memory never allocated.
I'm keeping it around as an artifact & for future allocators I may make.
NOTE(Ed): Prefer sanitizers.
package grime
MemoryTrackerEntry :: struct {
start, end : rawptr,
}
MemoryTracker :: struct {
parent : ^MemoryTracker,
name : string,
entries : Array(MemoryTrackerEntry),
}
Track_Memory :: false
@(disabled = Track_Memory == false)
memtracker_clear :: proc (tracker: MemoryTracker) {
log_print_fmt("Clearing tracker: %v", tracker.name)
memtracker_dump_entries(tracker);
array_clear(tracker.entries)
}
@(disabled = Track_Memory == false)
memtracker_init :: proc (tracker: ^MemoryTracker, allocator: Odin_Allocator, num_entries: int, name: string) {
tracker.name = name
error: AllocatorError
tracker.entries, error = make( Array(MemoryTrackerEntry), num_entries, dbg_name = name, allocator = allocator )
if error != AllocatorError.None do fatal("Failed to allocate memory tracker's hashmap");
}
@(disabled = Track_Memory == false)
memtracker_register :: proc(tracker: ^MemoryTracker, new_entry: MemoryTrackerEntry )
{
profile(#procedure)
if tracker.entries.num == tracker.entries.capacity {
ensure(false, "Memory tracker entries array full, can no longer register any more allocations")
return
}
for idx in 0..< tracker.entries.num
{
entry := & tracker.entries.data[idx]
if new_entry.start > entry.start do continue
if (entry.end < new_entry.start) {
msg := str_pfmt("Detected a collision:\nold_entry: %v -> %v\nnew_entry: %v -> %v | %v", entry.start, entry.end, new_entry.start, new_entry.end, tracker.name )
ensure( false, msg )
memtracker_dump_entries(tracker^)
}
array_append_at(& tracker.entries, new_entry, idx)
log_print_fmt("Registered: %v -> %v | %v", new_entry.start, new_entry.end, tracker.name)
return
}
array_append( & tracker.entries, new_entry )
log_print_fmt("Registered: %v -> %v | %v", new_entry.start, new_entry.end, tracker.name )
}
@(disabled = Track_Memory == false)
memtracker_register_auto_name :: #force_inline proc(tracker: ^MemoryTracker, start, end: rawptr) {
memtracker_register( tracker, {start, end})
}
@(disabled = Track_Memory == false)
memtracker_register_auto_name_slice :: #force_inline proc( tracker : ^MemoryTracker, slice : []byte ) {
memtracker_register( tracker, { raw_data(slice), transmute(rawptr) & cursor(slice)[len(slice) - 1] })
}
@(disabled = Track_Memory == false)
memtracker_unregister :: proc( tracker : MemoryTracker, to_remove : MemoryTrackerEntry )
{
profile(#procedure)
entries := array_to_slice(tracker.entries)
for idx in 0..< tracker.entries.num
{
entry := & entries[idx]
if entry.start == to_remove.start {
if (entry.end == to_remove.end || to_remove.end == nil) {
log_print_fmt("Unregistered: %v -> %v | %v", to_remove.start, to_remove.end, tracker.name );
array_remove_at(tracker.entries, idx)
return
}
ensure(false, str_pfmt_tmp("Found an entry with the same start address but end address was different:\nentry : %v -> %v\nto_remove: %v -> %v | %v", entry.start, entry.end, to_remove.start, to_remove.end, tracker.name ))
memtracker_dump_entries(tracker)
}
}
ensure(false, str_pfmt_tmp("Attempted to unregister an entry that was not tracked: %v -> %v | %v", to_remove.start, to_remove.end, tracker.name))
memtracker_dump_entries(tracker)
}
@(disabled = Track_Memory == false)
memtracker_check_for_collisions :: proc ( tracker : MemoryTracker )
{
profile(#procedure)
// entries := array_to_slice(tracker.entries)
for idx in 1 ..< tracker.entries.num {
// Check to make sure each allocations adjacent entries do not intersect
left := & tracker.entries.data[idx - 1]
right := & tracker.entries.data[idx]
collided := left.start > right.start || left.end > right.end
if collided {
msg := str_pfmt_tmp("Memory tracker detected a collision:\nleft: %v\nright: %v | %v", left, right, tracker.name )
ensure(false, msg)
memtracker_dump_entries(tracker)
}
}
}
@(disabled = Track_Memory == false)
memtracker_dump_entries :: proc( tracker : MemoryTracker ) {
log_print( "Dumping Memory Tracker:")
for idx in 0 ..< tracker.entries.num {
entry := & tracker.entries.data[idx]
log_print_fmt("%v -> %v", entry.start, entry.end)
}
}
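
A sketch of the intended lifetime (hypothetical proc; the calls are no-ops unless `Track_Memory` is enabled):

```odin
example_memtracker :: proc(allocator: Odin_Allocator) {
	tracker: MemoryTracker
	memtracker_init(&tracker, allocator, 1024, "example")
	block: [64]byte // stand-in for a real allocation
	memtracker_register_auto_name_slice(&tracker, block[:])
	memtracker_unregister(tracker, { raw_data(block[:]), nil }) // nil end matches any end
}
```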


@@ -6,13 +6,14 @@ import "base:builtin"
import "base:intrinsics"
atomic_thread_fence :: intrinsics.atomic_thread_fence
mem_zero_volatile :: intrinsics.mem_zero_volatile
add_overflow :: intrinsics.overflow_add
// mem_zero :: intrinsics.mem_zero
// mem_copy :: intrinsics.mem_copy_non_overlapping
// mem_copy_overlapping :: intrinsics.mem_copy
mem_zero :: #force_inline proc "contextless" (data: rawptr, len: int) { intrinsics.mem_zero (data, len) }
mem_copy_non_overlapping :: #force_inline proc "contextless" (dst, src: rawptr, len: int) { intrinsics.mem_copy_non_overlapping(dst, src, len) }
mem_copy :: #force_inline proc "contextless" (dst, src: rawptr, len: int) { intrinsics.mem_copy (dst, src, len) }
mem_zero :: #force_inline proc "contextless" (data: rawptr, len: int) { intrinsics.mem_zero (data, len) }
mem_copy :: #force_inline proc "contextless" (dst, src: rawptr, len: int) { intrinsics.mem_copy_non_overlapping(dst, src, len) }
mem_copy_overlapping :: #force_inline proc "contextless" (dst, src: rawptr, len: int) { intrinsics.mem_copy (dst, src, len) }
import "base:runtime"
Assertion_Failure_Proc :: runtime.Assertion_Failure_Proc
@@ -21,71 +22,91 @@ import "base:runtime"
LoggerLevel :: runtime.Logger_Level
LoggerOptions :: runtime.Logger_Options
Random_Generator :: runtime.Random_Generator
SourceCodeLocation :: runtime.Source_Code_Location
slice_copy_overlapping :: runtime.copy_slice
import fmt_io "core:fmt"
// % based template formatters
str_pfmt_out :: fmt_io.printf
str_pfmt_tmp :: #force_inline proc(fmt: string, args: ..any, newline := false) -> string { context.temp_allocator = odin_ainfo_giftwrap(context.temp_allocator); return fmt_io.tprintf(fmt, ..args, newline = newline) }
str_pfmt :: fmt_io.aprintf // Decided to make aprintf the default. (It will always be the default allocator)
str_pfmt_tmp :: #force_inline proc(fmt: string, args: ..any, newline := false) -> string { context.temp_allocator = resolve_odin_allocator(context.temp_allocator); return fmt_io.tprintf(fmt, ..args, newline = newline) }
str_pfmt :: #force_inline proc(fmt: string, args: ..any, allocator := context.allocator, newline := false) -> string { return fmt_io.aprintf(fmt, ..args, newline = newline, allocator = resolve_odin_allocator(allocator)) }
str_pfmt_builder :: fmt_io.sbprintf
str_pfmt_buffer :: fmt_io.bprintf
str_pfmt_file_ln :: fmt_io.fprintln
str_tmp_from_any :: fmt_io.tprint
str_tmp_from_any :: #force_inline proc(args: ..any, sep := " ") -> string { context.temp_allocator = resolve_odin_allocator(context.temp_allocator); return fmt_io.tprint(..args, sep = sep) }
import "core:log"
Default_File_Logger_Opts :: log.Default_File_Logger_Opts
Logger_Full_Timestamp_Opts :: log.Full_Timestamp_Opts
import "core:mem"
Odin_AllocatorMode :: mem.Allocator_Mode
Odin_AllocatorProc :: mem.Allocator_Proc
DEFAULT_ALIGNMENT :: mem.DEFAULT_ALIGNMENT
DEFAULT_PAGE_SIZE :: mem.DEFAULT_PAGE_SIZE
Odin_Allocator :: mem.Allocator
Odin_AllocatorQueryInfo :: mem.Allocator_Query_Info
Odin_AllocatorError :: mem.Allocator_Error
Odin_AllocatorQueryInfo :: mem.Allocator_Query_Info
Odin_AllocatorMode :: mem.Allocator_Mode
Odin_AllocatorModeSet :: mem.Allocator_Mode_Set
Odin_AllocatorProc :: mem.Allocator_Proc
align_forward_int :: mem.align_forward_int
align_forward_uintptr :: mem.align_forward_uintptr
align_forward_raw :: mem.align_forward
is_power_of_two :: mem.is_power_of_two
align_pow2 :: mem.align_forward_int
mem_fill :: mem.set
import "core:mem/virtual"
VirtualProtectFlags :: virtual.Protect_Flags
import core_os "core:os"
FS_Open_Readonly :: core_os.O_RDONLY
FS_Open_Writeonly :: core_os.O_WRONLY
FS_Open_Create :: core_os.O_CREATE
FS_Open_Trunc :: core_os.O_TRUNC
import "core:os"
FS_Open_Readonly :: os.O_RDONLY
FS_Open_Writeonly :: os.O_WRONLY
FS_Open_Create :: os.O_CREATE
FS_Open_Trunc :: os.O_TRUNC
OS_ERROR_NONE :: core_os.ERROR_NONE
OS_Handle :: core_os.Handle
OS_ERROR_HANDLE_EOF :: core_os.ERROR_HANDLE_EOF
OS_INVALID_HANDLE :: core_os.INVALID_HANDLE
OS_ERROR_NONE :: os.ERROR_NONE
OS_Handle :: os.Handle
OS_ERROR_HANDLE_EOF :: os.ERROR_HANDLE_EOF
OS_INVALID_HANDLE :: os.INVALID_HANDLE
FileFlag_Create :: core_os.O_CREATE
FileFlag_ReadWrite :: core_os.O_RDWR
FileTime :: core_os.File_Time
file_close :: core_os.close
file_open :: core_os.open
file_read :: core_os.read
file_remove :: core_os.remove
file_seek :: core_os.seek
file_status :: core_os.stat
file_truncate :: core_os.truncate
file_write :: core_os.write
process_exit :: os.exit
file_read_entire :: core_os.read_entire_file
file_write_entire :: core_os.write_entire_file
FileFlag_Create :: os.O_CREATE
FileFlag_ReadWrite :: os.O_RDWR
FileTime :: os.File_Time
file_close :: os.close
file_open :: os.open
file_read :: os.read
file_remove :: os.remove
file_seek :: os.seek
file_status :: os.stat
file_truncate :: os.truncate
file_write :: os.write
file_read_entire_from_filename :: #force_inline proc(name: string, allocator := context.allocator, loc := #caller_location) -> ([]byte, bool) { return os.read_entire_file_from_filename(name, resolve_odin_allocator(allocator), loc) }
file_write_entire :: os.write_entire_file
file_read_entire :: proc {
file_read_entire_from_filename,
}
import "core:strings"
StrBuilder :: strings.Builder
strbuilder_from_bytes :: strings.builder_from_bytes
import "core:slice"
slice_zero :: slice.zero
import "core:prof/spall"
Spall_Context :: spall.Context
Spall_Buffer :: spall.Buffer
import "core:sync"
Mutex :: sync.Mutex
sync_fence :: sync.atomic_thread_fence
sync_load :: sync.atomic_load_explicit
sync_store :: sync.atomic_store_explicit
import "core:thread"
SysThread :: thread.Thread
import "core:time"
TIME_IS_SUPPORTED :: time.IS_SUPPORTED
@@ -98,35 +119,55 @@ import "core:unicode/utf8"
runes_to_string :: utf8.runes_to_string
// string_to_runes :: utf8.string_to_runes
array_append :: proc {
array_append_value,
array_append_array,
array_append_slice,
}
array_append_at :: proc {
// array_append_at_array,
array_append_at_slice,
array_append_at_value,
}
cursor :: proc {
raw_cursor,
ptr_cursor,
slice_cursor,
string_cursor,
}
end :: proc {
slice_end,
slice_byte_end,
string_end,
}
to_string :: proc {
strings.to_string,
}
copy :: proc {
mem_copy,
slice_copy,
}
copy_non_overlaping :: proc {
mem_copy_non_overlapping,
copy_overlapping :: proc {
mem_copy_overlapping,
slice_copy_overlapping,
}
fill :: proc {
mem_fill,
slice_fill,
}
iterator :: proc {
iterator_ringbuf_fixed,
}
make :: proc {
array_init,
}
peek_back :: proc {
ringbuf_fixed_peak_back,
}
to_bytes :: proc {
slice_to_bytes,
type_to_bytes,
}
to_string :: proc {
strings.to_string,
}
zero :: proc {
mem_zero,
slice_zero,
}

code2/grime/profiler.odin

@@ -0,0 +1,30 @@
package grime
import "core:prof/spall"
/*
This is just a snippet file, do not use directly.
*/
set_profiler_module_context :: #force_inline proc "contextless" (profiler : ^Spall_Context) {
sync_store(& grime_memory.spall_context, profiler, .Release)
}
set_profiler_thread_buffer :: #force_inline proc "contextless" (buffer: ^Spall_Buffer) {
sync_store(& grime_thread.spall_buffer, buffer, .Release)
}
DISABLE_PROFILING :: true
@(deferred_none = profile_end, disabled = DISABLE_PROFILING)
profile :: #force_inline proc "contextless" ( name : string, loc := #caller_location ) {
spall._buffer_begin( grime_memory.spall_context, grime_thread.spall_buffer, name, "", loc )
}
@(disabled = DISABLE_PROFILING)
profile_begin :: #force_inline proc "contextless" ( name : string, loc := #caller_location ) {
spall._buffer_begin( grime_memory.spall_context, grime_thread.spall_buffer, name, "", loc )
}
@(disabled = DISABLE_PROFILING)
profile_end :: #force_inline proc "contextless" () {
spall._buffer_end( grime_memory.spall_context, grime_thread.spall_buffer)
}
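
A sketch of how a thread wires into the profiler before using the procs above (spall context/buffer construction is elided; see `core:prof/spall`; proc names are hypothetical):

```odin
example_profiler_setup :: proc(ctx: ^Spall_Context, buffer: ^Spall_Buffer) {
	set_profiler_module_context(ctx)
	set_profiler_thread_buffer(buffer) // per-thread, thread_local storage
	example_profiled_work()
}
example_profiled_work :: proc "contextless" () {
	profile(#procedure) // profile_end fires at scope exit via deferred_none
	// ... measured work ...
}
```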


@@ -1,168 +0,0 @@
package grime
RingBufferFixed :: struct( $Type: typeid, $Size: u32 ) {
head : u32,
tail : u32,
num : u32,
items : [Size] Type,
}
ringbuf_fixed_clear :: #force_inline proc "contextless" ( using buffer : ^RingBufferFixed( $Type, $Size)) {
head = 0
tail = 0
num = 0
}
ringbuf_fixed_is_full :: #force_inline proc "contextless" ( using buffer : RingBufferFixed( $Type, $Size)) -> bool {
return num == Size
}
ringbuf_fixed_is_empty :: #force_inline proc "contextless" ( using buffer : RingBufferFixed( $Type, $Size)) -> bool {
return num == 0
}
ringbuf_fixed_peek_front_ref :: #force_inline proc "contextless" ( using buffer : ^RingBufferFixed( $Type, $Size)) -> ^Type {
assert(num > 0, "Attempted to peek an empty ring buffer")
return & items[ head ]
}
ringbuf_fixed_peek_front :: #force_inline proc "contextless" ( using buffer : RingBufferFixed( $Type, $Size)) -> Type {
assert(num > 0, "Attempted to peek an empty ring buffer")
return items[ head ]
}
ringbuf_fixed_peak_back :: #force_inline proc ( using buffer : RingBufferFixed( $Type, $Size)) -> Type {
assert(num > 0, "Attempted to peek an empty ring buffer")
buf_size := u32(Size)
index := (tail - 1 + buf_size) % buf_size
return items[ index ]
}
ringbuf_fixed_push :: #force_inline proc(using buffer: ^RingBufferFixed($Type, $Size), value: Type) {
if num == Size do head = (head + 1) % Size
else do num += 1
items[ tail ] = value
tail = (tail + 1) % Size
}
ringbuf_fixed_push_slice :: proc(buffer: ^RingBufferFixed($Type, $Size), slice: []Type) -> u32
{
size := u32(Size)
slice_size := u32(len(slice))
// assert( slice_size <= size, "Attempting to append a slice that is larger than the ring buffer!" )
if slice_size == 0 do return 0
items_to_add := min( slice_size, size)
items_added : u32 = 0
if items_to_add > Size - buffer.num
{
// Some or all existing items will be overwritten
overwrite_count := items_to_add - (Size - buffer.num)
buffer.head = (buffer.head + overwrite_count) % size
buffer.num = size
}
else
{
buffer.num += items_to_add
}
if items_to_add <= size
{
// Case 1: Slice fits entirely or partially in the buffer
space_to_end := size - buffer.tail
first_chunk := min(items_to_add, space_to_end)
// First copy: from tail to end of buffer
copy( buffer.items[ buffer.tail: ] , slice[ :first_chunk ] )
if first_chunk < items_to_add {
// Second copy: wrap around to start of buffer
second_chunk := items_to_add - first_chunk
copy( buffer.items[:], slice[ first_chunk : items_to_add ] )
}
buffer.tail = (buffer.tail + items_to_add) % Size
items_added = items_to_add
}
else
{
// Case 2: Slice is larger than buffer, only keep last Size elements
to_add := slice[ slice_size - size: ]
// First copy: from start of buffer to end
first_chunk := min(Size, u32(len(to_add)))
copy( buffer.items[:], to_add[ :first_chunk ] )
if first_chunk < Size
{
// Second copy: wrap around
copy( buffer.items[ first_chunk: ], to_add[ first_chunk: ] )
}
buffer.head = 0
buffer.tail = 0
buffer.num = Size
items_added = Size
}
return items_added
}
ringbuf_fixed_pop :: #force_inline proc "contextless" ( using buffer : ^RingBufferFixed( $Type, $Size )) -> Type {
assert(num > 0, "Attempted to pop an empty ring buffer")
value := items[ head ]
head = ( head + 1 ) % Size
num -= 1
return value
}
RingBufferFixedIterator :: struct( $Type : typeid) {
items : []Type,
head : u32,
tail : u32,
index : u32,
remaining : u32,
}
iterator_ringbuf_fixed :: proc(buffer: ^RingBufferFixed($Type, $Size)) -> RingBufferFixedIterator(Type)
{
iter := RingBufferFixedIterator(Type){
items = buffer.items[:],
head = buffer.head,
tail = buffer.tail,
remaining = buffer.num,
}
buff_size := u32(Size)
if buffer.num > 0 {
// Start from the last pushed item (one before tail)
iter.index = (buffer.tail - 1 + buff_size) % buff_size
} else {
iter.index = buffer.tail // This will not be used as remaining is 0
}
return iter
}
next_ringbuf_fixed_iterator :: proc(iter : ^RingBufferFixedIterator( $Type)) -> ^Type
{
using iter
if remaining == 0 {
return nil // If there are no items left to iterate over
}
buf_size := cast(u32) len(items)
result := &items[index]
// Decrement index and wrap around if necessary
index = (index - 1 + buf_size) % buf_size
remaining -= 1
return result
}


@@ -0,0 +1,11 @@
package grime
@(private) grime_memory: StaticMemory
@(private, thread_local) grime_thread: ThreadMemory
StaticMemory :: struct {
spall_context: ^Spall_Context,
}
ThreadMemory :: struct {
spall_buffer: ^Spall_Buffer,
}


@@ -4,7 +4,17 @@ Raw_String :: struct {
data: [^]byte,
len: int,
}
string_cursor :: proc(s: string) -> [^]u8 { return slice_cursor(transmute([]byte) s) }
string_copy :: proc(dst, src: string) { slice_copy (transmute([]byte) dst, transmute([]byte) src) }
string_end :: proc(s: string) -> ^u8 { return slice_end (transmute([]byte) s) }
string_assert :: proc(s: string) { slice_assert(transmute([]byte) s) }
string_cursor :: #force_inline proc "contextless" (s: string) -> [^]u8 { return slice_cursor(transmute([]byte) s) }
string_copy :: #force_inline proc "contextless" (dst, src: string) { slice_copy (transmute([]byte) dst, transmute([]byte) src) }
string_end :: #force_inline proc "contextless" (s: string) -> ^u8 { return slice_end (transmute([]byte) s) }
string_assert :: #force_inline proc "contextless" (s: string) { slice_assert(transmute([]byte) s) }
str_to_cstr_capped :: proc(content: string, mem: []byte) -> cstring {
copy_len := min(len(content), len(mem) - 1)
if copy_len > 0 do copy(mem[:copy_len], transmute([]byte) content)
mem[copy_len] = 0
return transmute(cstring) raw_data(mem)
}
cstr_len_capped :: #force_inline proc "contextless" (content: cstring, cap: int) -> (len: int) { for len = 0; (len <= cap) && (transmute([^]byte)content)[len] != 0; len += 1 {} return }
cstr_to_str_capped :: #force_inline proc "contextless" (content: cstring, mem: []byte) -> string { return transmute(string) Raw_String { cursor(mem), cstr_len_capped (content, len(mem)) } }
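
These capped conversions never allocate; they truncate to the provided buffer and always NUL-terminate. A small sketch of the round trip (buffer name and contents are illustrative):

```odin
scratch: [64]byte
// Truncated if the string exceeds len(scratch) - 1 bytes; mem[copy_len] is set to 0 either way.
c_path := str_to_cstr_capped("some/asset.png", scratch[:])
// Reads back at most len(scratch) bytes, stopping at the first NUL.
round_trip := cstr_to_str_capped(c_path, scratch[:])
```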

View File

@@ -0,0 +1,30 @@
package grime
StrKey_U4 :: struct {
len: u32, // Length of string
offset: u32, // Offset in varena
}
StrKT_U4_Cell_Depth :: 4
StrKT_U4_Slot :: KTCX_Slot(StrKey_U4)
StrKT_U4_Cell :: KTCX_Cell(StrKT_U4_Slot, 4)
StrKT_U4_Table :: KTCX(StrKT_U4_Cell)
VStrKT_U4 :: struct {
varena: VArena, // Backed by growing vmem
kt: StrKT_U4_Table,
}
vstrkt_u4_init :: proc(varena: ^VArena, capacity: int, cache: ^VStrKT_U4)
{
capacity := cast(int) closest_prime(cast(uint) capacity)
ktcx_init(capacity, varena_allocator(varena), &cache.kt)
return
}
vstrkt_u4_intern :: proc(cache: ^VStrKT_U4) -> StrKey_U4
{
// profile(#procedure)
return {}
}

View File

@@ -19,7 +19,7 @@ thread__highres_wait :: proc( desired_ms : f64, loc := #caller_location ) -> b32
timer := win32.CreateWaitableTimerExW( nil, nil, win32.CREATE_WAITABLE_TIMER_HIGH_RESOLUTION, win32.TIMER_ALL_ACCESS )
if timer == nil {
msg := str_pfmt("Failed to create win32 timer - ErrorCode: %v", win32.GetLastError() )
log( msg, LoggerLevel.Warning, loc)
log_print( msg, LoggerLevel.Warning, loc)
return false
}
@@ -27,7 +27,7 @@ thread__highres_wait :: proc( desired_ms : f64, loc := #caller_location ) -> b32
result := win32.SetWaitableTimerEx( timer, & due_time, 0, nil, nil, nil, 0 )
if ! result {
msg := str_pfmt("Failed to set win32 timer - ErrorCode: %v", win32.GetLastError() )
log( msg, LoggerLevel.Warning, loc)
log_print( msg, LoggerLevel.Warning, loc)
return false
}
@@ -42,22 +42,22 @@ thread__highres_wait :: proc( desired_ms : f64, loc := #caller_location ) -> b32
{
case WAIT_ABANDONED:
msg := str_pfmt("Failed to wait for win32 timer - Error: WAIT_ABANDONED" )
log( msg, LoggerLevel.Error, loc)
log_print( msg, LoggerLevel.Error, loc)
return false
case WAIT_IO_COMPLETION:
msg := str_pfmt("Waited for win32 timer: Ended by APC queued to the thread" )
log( msg, LoggerLevel.Error, loc)
log_print( msg, LoggerLevel.Error, loc)
return false
case WAIT_OBJECT_0:
msg := str_pfmt("Waited for win32 timer- Reason : WAIT_OBJECT_0" )
log( msg, loc = loc)
log_print( msg, loc = loc)
return false
case WAIT_FAILED:
msg := str_pfmt("Wait for win32 timer failed - ErrorCode: %v", win32.GetLastError() )
log( msg, LoggerLevel.Error, loc)
log_print( msg, LoggerLevel.Error, loc)
return false
}

View File

@@ -0,0 +1,279 @@
package grime
// TODO(Ed): Review this
import "base:runtime"
// TODO(Ed): Support address sanitizer
/*
So this is a virtual memory backed arena allocator designed
to take advantage of one large contiguous reserve of memory,
with the expectation that resizes through its interface will only occur using the last allocated block.
Note(Ed): Odin's mem allocator now has that feature.
All virtual address space memory for this application is managed by a virtual arena.
No other part of the program will touch the virtual memory interface directly other than it.
Thus, for the scope of this prototype, virtual arenas are the only interfaces to dynamic address space for the runtime of the client app.
Ideally the host application as well (although this may not be the case for a while).
*/
VArenaFlags :: bit_set[VArenaFlag; u32]
VArenaFlag :: enum u32 {
No_Large_Pages,
}
VArena :: struct {
using vmem: VirtualMemoryRegion,
commit_size: int,
commit_used: int,
flags: VArenaFlags,
}
// Default growth_policy is varena_default_growth_policy
varena_make :: proc(to_reserve, commit_size: int, base_address: uintptr, flags: VArenaFlags = {}
) -> (arena: ^VArena, alloc_error: AllocatorError)
{
page_size := virtual_get_page_size()
verify( page_size > size_of(VirtualMemoryRegion), "Make sure page size is not smaller than a VirtualMemoryRegion?")
verify( to_reserve >= page_size, "Attempted to reserve less than a page size" )
verify( commit_size >= page_size, "Attempted to commit less than a page size")
verify( to_reserve >= commit_size, "Attempted to commit more than there is to reserve" )
vmem : VirtualMemoryRegion
vmem, alloc_error = virtual_reserve_and_commit( base_address, uint(to_reserve), uint(commit_size) )
if ensure(vmem.base_address != nil && alloc_error == .None, "Failed to allocate requested virtual memory for virtual arena") {
return
}
arena = transmute(^VArena) vmem.base_address;
arena.vmem = vmem
arena.commit_used = align_pow2(size_of(VArena), DEFAULT_ALIGNMENT)
arena.flags = flags
return
}
varena_alloc :: proc(self: ^VArena,
size: int,
alignment: int = DEFAULT_ALIGNMENT,
zero_memory := true,
location := #caller_location
) -> (data: []byte, alloc_error: AllocatorError)
{
verify( alignment & (alignment - 1) == 0, "Non-power of two alignment", location = location )
page_size := uint(virtual_get_page_size())
requested_size := uint(size)
if ensure(requested_size > 0, "Requested 0 size") do return nil, .Invalid_Argument
// ensure( requested_size > page_size, "Requested less than a page size, going to allocate a page size")
// requested_size = max(requested_size, page_size)
// TODO(Ed): Prevent multiple threads from entering here concurrently?
// sync.mutex_guard( & mutex )
commit_used := uint(self.commit_used)
reserved := uint(self.reserved)
commit_size := uint(self.commit_size)
alignment_offset := uint(0)
current_offset := uintptr(self.reserve_start) + uintptr(self.commit_used)
mask := uintptr(alignment - 1)
if (current_offset & mask != 0) do alignment_offset = uint(alignment) - uint(current_offset & mask)
size_to_allocate, overflow_signal := add_overflow( requested_size, alignment_offset )
if overflow_signal do return {}, .Out_Of_Memory
to_be_used : uint
to_be_used, overflow_signal = add_overflow( commit_used, size_to_allocate )
if (overflow_signal || to_be_used > reserved) do return {}, .Out_Of_Memory
header_offset := uint( uintptr(self.reserve_start) - uintptr(self.base_address) )
commit_left := self.committed - commit_used - header_offset
needs_more_committed := commit_left < size_to_allocate
if needs_more_committed {
profile("VArena Growing")
next_commit_size := max(to_be_used, commit_size)
alloc_error = virtual_commit( self.vmem, next_commit_size )
if alloc_error != .None do return
}
data_ptr := ([^]byte)(current_offset + uintptr(alignment_offset))
data = slice( data_ptr, requested_size )
self.commit_used = int(to_be_used)
alloc_error = .None
// log_backing: [Kilobyte * 16]byte; backing_slice := log_backing[:]
// log( str_pfmt_buffer( backing_slice, "varena alloc - BASE: %p PTR: %X, SIZE: %d", cast(rawptr) self.base_address, & data[0], requested_size) )
if zero_memory {
// log( str_pfmt_buffer( backing_slice, "Zeroring data (Range: %p to %p)", raw_data(data), cast(rawptr) (uintptr(raw_data(data)) + uintptr(requested_size))))
// zero( data )
mem_zero( data_ptr, int(requested_size) )
}
return
}
varena_grow :: #force_inline proc(self: ^VArena, old_memory: []byte, requested_size: int, alignment: int = DEFAULT_ALIGNMENT, zero_memory := true, loc := #caller_location
) -> (data: []byte, error: AllocatorError)
{
if ensure(old_memory != nil, "Growing without old_memory?") {
data, error = varena_alloc(self, requested_size, alignment, zero_memory, loc)
return
}
if ensure(requested_size != len(old_memory), "Requested grow when none needed") {
data = old_memory
return
}
alignment_offset := uintptr(cursor(old_memory)) & uintptr(alignment - 1)
if ensure(alignment_offset == 0 && requested_size > len(old_memory), "Requested a shrink from varena_grow") {
data = old_memory
return
}
old_memory_offset := cursor(old_memory)[len(old_memory):]
current_offset := self.reserve_start[self.commit_used:]
when false {
if old_size < page_size {
// We're dealing with an allocation that requested less than the minimum allocated on vmem.
// Provide them more of their actual memory
data = slice(transmute([^]byte)old_memory, size )
return
}
}
verify( old_memory_offset == current_offset,
"Cannot grow existing allocation in virtual arena to a larger size unless it was the last allocated" )
if old_memory_offset != current_offset
{
// Give it new memory and copy the old over. Old memory is unrecoverable until clear.
new_region : []byte
new_region, error = varena_alloc( self, requested_size, alignment, zero_memory, loc )
if ensure(new_region != nil && error == .None, "Failed to grab new region") {
data = old_memory
return
}
copy( cursor(new_region), cursor(old_memory), len(old_memory) )
data = new_region
// log_print_fmt("varena resize (new): old: %p %v new: %p %v", old_memory, old_size, (& data[0]), size)
return
}
new_region : []byte
new_region, error = varena_alloc( self, requested_size - len(old_memory), alignment, zero_memory, loc)
if ensure(new_region != nil && error == .None, "Failed to grab new region") {
data = old_memory
return
}
data = slice(cursor(old_memory), requested_size )
// log_print_fmt("varena resize (expanded): old: %p %v new: %p %v", old_memory, old_size, (& data[0]), size)
return
}
varena_shrink :: proc(self: ^VArena, memory: []byte, requested_size: int, loc := #caller_location) -> (data: []byte, error: AllocatorError) {
if requested_size == len(memory) { return memory, .None }
if ensure(memory != nil, "Shrinking without old_memory?") do return memory, .Invalid_Argument
current_offset := self.reserve_start[self.commit_used:]
shrink_amount := len(memory) - requested_size
if shrink_amount < 0 { return memory, .None }
assert(cursor(memory) == current_offset)
self.commit_used -= shrink_amount
return memory[:requested_size], .None
}
varena_reset :: #force_inline proc(self: ^VArena) {
// TODO(Ed): Prevent multiple threads from entering here concurrently?
// sync.mutex_guard( & mutex )
self.commit_used = 0
}
varena_release :: #force_inline proc(self: ^VArena) {
// TODO(Ed): Prevent multiple threads from entering here concurrently?
// sync.mutex_guard( & mutex )
virtual_release( self.vmem )
self.commit_used = 0
}
varena_rewind :: #force_inline proc(arena: ^VArena, save_point: AllocatorSP, loc := #caller_location) {
assert_contextless(save_point.type_sig == varena_allocator_proc)
assert_contextless(save_point.slot >= 0 && save_point.slot <= int(arena.commit_used))
arena.commit_used = save_point.slot
}
varena_save :: #force_inline proc(arena: ^VArena) -> AllocatorSP { return AllocatorSP { type_sig = varena_allocator_proc, slot = cast(int) arena.commit_used }}
varena_allocator_proc :: proc(input: AllocatorProc_In, output: ^AllocatorProc_Out) {
assert(output != nil)
assert(input.data != nil)
arena := transmute(^VArena) input.data
switch input.op {
case .Alloc, .Alloc_NoZero:
output.allocation, output.error = varena_alloc(arena, input.requested_size, input.alignment, input.op == .Alloc, input.loc)
return
case .Free:
output.error = .Mode_Not_Implemented
case .Reset:
varena_reset(arena)
case .Grow, .Grow_NoZero:
output.allocation, output.error = varena_grow(arena, input.old_allocation, input.requested_size, input.alignment, input.op == .Grow, input.loc)
case .Shrink:
output.allocation, output.error = varena_shrink(arena, input.old_allocation, input.requested_size)
case .Rewind:
varena_rewind(arena, input.save_point)
case .SavePoint:
output.save_point = varena_save(arena)
case .Is_Owner:
output.error = .Mode_Not_Implemented
case .Query:
output.features = {.Alloc, .Reset, .Grow, .Shrink, .Rewind, .Actually_Resize, .Hint_Fast_Bump, .Is_Owner}
output.max_alloc = int(arena.reserved) - arena.commit_used
output.min_alloc = 0
output.left = output.max_alloc
output.save_point = varena_save(arena)
case .Startup, .Shutdown, .Thread_Start, .Thread_Stop:
output.error = .Mode_Not_Implemented
}
}
varena_odin_allocator_proc :: proc(
allocator_data : rawptr,
mode : Odin_AllocatorMode,
size : int,
alignment : int,
old_memory : rawptr,
old_size : int,
location : SourceCodeLocation = #caller_location
) -> (data: []byte, error: Odin_AllocatorError)
{
error_: AllocatorError
arena := transmute( ^VArena) allocator_data
page_size := uint(virtual_get_page_size())
switch mode {
case .Alloc, .Alloc_Non_Zeroed:
data, error_ = varena_alloc( arena, size, alignment, (mode == .Alloc), location )
case .Free:
error = .Mode_Not_Implemented
case .Free_All:
varena_reset( arena )
case .Resize, .Resize_Non_Zeroed:
if size > old_size do data, error_ = varena_grow (arena, slice(cursor(old_memory), old_size), size, alignment, (mode == .Resize), location)
else do data, error_ = varena_shrink(arena, slice(cursor(old_memory), old_size), size, location)
case .Query_Features:
set := cast( ^Odin_AllocatorModeSet) old_memory
if set != nil do (set ^) = {.Alloc, .Alloc_Non_Zeroed, .Free_All, .Resize, .Query_Features}
case .Query_Info:
info := (^Odin_AllocatorQueryInfo)(old_memory)
info.pointer = transmute(rawptr) varena_save(arena).slot
info.size = cast(int) arena.reserved
info.alignment = DEFAULT_ALIGNMENT
return to_bytes(info), nil
}
error = transmute(Odin_AllocatorError) error_
return
}
varena_odin_allocator :: proc(arena: ^VArena) -> (allocator: Odin_Allocator) {
allocator.procedure = varena_odin_allocator_proc
allocator.data = arena
return
}
when ODIN_DEBUG {
varena_ainfo :: #force_inline proc "contextless" (arena: ^VArena) -> AllocatorInfo { return AllocatorInfo{proc_id = .VArena, data = arena} }
varena_allocator :: #force_inline proc "contextless" (arena: ^VArena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{proc_id = .VArena, data = arena} }
}
else {
varena_ainfo :: #force_inline proc "contextless" (arena: ^VArena) -> AllocatorInfo { return AllocatorInfo{procedure = varena_allocator_proc, data = arena} }
varena_allocator :: #force_inline proc "contextless" (arena: ^VArena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{procedure = varena_allocator_proc, data = arena} }
}
varena_push_item :: #force_inline proc(va: ^VArena, $Type: typeid, alignment: int = DEFAULT_ALIGNMENT, should_zero := true, location := #caller_location
) -> (^Type, AllocatorError) {
raw, error := varena_alloc(va, size_of(Type), alignment, should_zero, location)
return transmute(^Type) cursor(raw), error
}
varena_push_slice :: #force_inline proc(va: ^VArena, $Type: typeid, amount: int, alignment: int = DEFAULT_ALIGNMENT, should_zero := true, location := #caller_location
) -> ([]Type, AllocatorError) {
raw, error := varena_alloc(va, size_of(Type) * amount, alignment, should_zero, location)
return slice(transmute([^]Type) cursor(raw), len(raw) / size_of(Type)), error
}

View File

@@ -0,0 +1,198 @@
package grime
/*
Arena (Chained Virtual Arenas):
*/
ArenaFlags :: bit_set[ArenaFlag; u32]
ArenaFlag :: enum u32 {
No_Large_Pages,
No_Chaining,
}
Arena :: struct {
backing: ^VArena,
prev: ^Arena,
current: ^Arena,
base_pos: int,
pos: int,
flags: ArenaFlags,
}
arena_make :: proc(reserve_size : int = Mega * 64, commit_size : int = Mega * 64, base_addr: uintptr = 0, flags: ArenaFlags = {}) -> (^Arena, AllocatorError) {
header_size := align_pow2(size_of(Arena), DEFAULT_ALIGNMENT)
current, error := varena_make(reserve_size, commit_size, base_addr, transmute(VArenaFlags) flags)
if ensure(error == .None) do return nil, error
arena: ^Arena; arena, error = varena_push_item(current, Arena, 1)
if ensure(error == .None) do return nil, error
arena^ = Arena {
backing = current,
prev = nil,
current = arena,
base_pos = 0,
pos = header_size,
flags = flags,
}
return arena, .None
}
arena_alloc :: proc(arena: ^Arena, size: int, alignment: int = DEFAULT_ALIGNMENT, should_zero := true, loc := #caller_location) -> (allocation: []byte, error: AllocatorError) {
assert(arena != nil)
active := arena.current
size_requested := size
size_aligned := align_pow2(size_requested, alignment)
pos_pre := active.pos
pos_pst := pos_pre + size_aligned
reserved := int(active.backing.reserved)
should_chain := (.No_Chaining not_in arena.flags) && (reserved < pos_pst)
if should_chain {
new_arena: ^Arena; new_arena, error = arena_make(reserved, active.backing.commit_size, 0, transmute(ArenaFlags) active.backing.flags)
if ensure(error == .None) do return
new_arena.base_pos = active.base_pos + reserved
sll_stack_push_n(& arena.current, & new_arena, & new_arena.prev)
new_arena.prev = active
active = arena.current
pos_pre = active.pos
pos_pst = pos_pre + size_aligned
}
result_ptr := transmute([^]byte) (uintptr(active) + uintptr(pos_pre))
vresult: []byte; vresult, error = varena_alloc(active.backing, size_aligned, alignment, should_zero)
if ensure(error == .None) do return
assert(cursor(vresult) == result_ptr)
active.pos = pos_pst
allocation = slice(result_ptr, size)
return
}
arena_grow :: proc(arena: ^Arena, old_allocation: []byte, requested_size: int, alignment: int = DEFAULT_ALIGNMENT, zero_memory := true, loc := #caller_location
) -> (allocation: []byte, error: AllocatorError)
{
active := arena.current
if len(old_allocation) == 0 { allocation = {}; return }
alloc_end := end(old_allocation)
arena_end := transmute([^]byte) (uintptr(active) + uintptr(active.pos))
if alloc_end == arena_end
{
// Can grow in place
grow_amount := requested_size - len(old_allocation)
aligned_grow := align_pow2(grow_amount, alignment)
if active.pos + aligned_grow <= cast(int) active.backing.reserved
{
vresult: []byte; vresult, error = varena_alloc(active.backing, aligned_grow, alignment, zero_memory)
if ensure(error == .None) do return
active.pos += aligned_grow
allocation = slice(cursor(old_allocation), requested_size)
return
}
}
// Can't grow in place, allocate new
allocation, error = arena_alloc(arena, requested_size, alignment, false)
if ensure(error == .None) do return
copy(allocation, old_allocation)
zero(cursor(allocation)[len(old_allocation):], (requested_size - len(old_allocation)) * int(zero_memory))
return
}
arena_shrink :: proc(arena: ^Arena, old_allocation: []byte, requested_size, alignment: int, loc := #caller_location) -> (result: []byte, error: AllocatorError) {
active := arena.current
if ensure(len(old_allocation) != 0) { return }
alloc_end := end(old_allocation)
arena_end := transmute([^]byte) (uintptr(active) + uintptr(active.pos))
if alloc_end != arena_end {
// Not at the end; the arena position can't be reclaimed, so just return the adjusted view.
result = old_allocation[:requested_size]
return
}
// Calculate shrinkage
aligned_original := align_pow2(len(old_allocation), DEFAULT_ALIGNMENT)
aligned_new := align_pow2(requested_size, alignment)
pos_reduction := aligned_original - aligned_new
active.pos -= pos_reduction
result, error = varena_shrink(active.backing, old_allocation, aligned_new)
return
}
arena_release :: proc(arena: ^Arena) {
assert(arena != nil)
curr := arena.current
for curr != nil {
prev := curr.prev
varena_release(curr.backing)
curr = prev
}
}
arena_reset :: proc(arena: ^Arena) {
arena_rewind(arena, AllocatorSP { type_sig = arena_allocator_proc, slot = 0 })
}
arena_rewind :: proc(arena: ^Arena, save_point: AllocatorSP, loc := #caller_location) {
assert(arena != nil)
assert(save_point.type_sig == arena_allocator_proc)
header_size := align_pow2(size_of(Arena), DEFAULT_ALIGNMENT)
curr := arena.current
big_pos := max(header_size, save_point.slot)
// Release arenas that are beyond the save point
for curr.base_pos >= big_pos {
prev := curr.prev
varena_release(curr.backing)
curr = prev
}
arena.current = curr
new_pos := big_pos - curr.base_pos; assert(new_pos <= curr.pos)
curr.pos = new_pos
varena_rewind(curr.backing, { type_sig = varena_allocator_proc, slot = curr.pos + size_of(VArena) })
}
arena_save :: #force_inline proc(arena: ^Arena) -> AllocatorSP { return { type_sig = arena_allocator_proc, slot = arena.base_pos + arena.current.pos } }
arena_allocator_proc :: proc(input: AllocatorProc_In, output: ^AllocatorProc_Out) {
assert(output != nil)
assert(input.data != nil)
arena := transmute(^Arena) input.data
switch input.op {
case .Alloc, .Alloc_NoZero:
output.allocation, output.error = arena_alloc(arena, input.requested_size, input.alignment, input.op == .Alloc, input.loc)
return
case .Free:
output.error = .Mode_Not_Implemented
case .Reset:
arena_reset(arena)
case .Grow, .Grow_NoZero:
output.allocation, output.error = arena_grow(arena, input.old_allocation, input.requested_size, input.alignment, input.op == .Grow, input.loc)
case .Shrink:
output.allocation, output.error = arena_shrink(arena, input.old_allocation, input.requested_size, input.alignment, input.loc)
case .Rewind:
arena_rewind(arena, input.save_point, input.loc)
case .SavePoint:
output.save_point = arena_save(arena)
case .Query:
output.features = {.Alloc, .Reset, .Grow, .Shrink, .Rewind, .Actually_Resize, .Is_Owner, .Hint_Fast_Bump }
output.max_alloc = int(arena.backing.reserved)
output.min_alloc = 0
output.left = output.max_alloc
output.save_point = arena_save(arena)
case .Is_Owner:
output.error = .Mode_Not_Implemented
case .Startup, .Shutdown, .Thread_Start, .Thread_Stop:
output.error = .Mode_Not_Implemented
}
}
arena_odin_allocator_proc :: proc(
allocator_data : rawptr,
mode : Odin_AllocatorMode,
size : int,
alignment : int,
old_memory : rawptr,
old_size : int,
location : SourceCodeLocation = #caller_location
) -> (data: []byte, alloc_error: Odin_AllocatorError)
{
panic("not implemented")
}
when ODIN_DEBUG {
arena_ainfo :: #force_inline proc "contextless" (arena: ^Arena) -> AllocatorInfo { return AllocatorInfo{proc_id = .Arena, data = arena} }
arena_allocator :: #force_inline proc "contextless" (arena: ^Arena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{proc_id = .Arena, data = arena} }
}
else {
arena_ainfo :: #force_inline proc "contextless" (arena: ^Arena) -> AllocatorInfo { return AllocatorInfo{procedure = arena_allocator_proc, data = arena} }
arena_allocator :: #force_inline proc "contextless" (arena: ^Arena) -> Odin_Allocator { return transmute(Odin_Allocator) AllocatorInfo{procedure = arena_allocator_proc, data = arena} }
}
arena_push_item :: proc()
{
}
arena_push_array :: proc()
{
}

View File

@@ -32,7 +32,7 @@ virtual_reserve_remaining :: proc "contextless" ( using vmem : VirtualMemoryRegi
@(require_results)
virtual_commit :: proc "contextless" ( using vmem : VirtualMemoryRegion, size : uint ) -> ( alloc_error : AllocatorError )
{
if size < committed {
if size < committed {
return .None
}
@@ -40,7 +40,7 @@ virtual_commit :: proc "contextless" ( using vmem : VirtualMemoryRegion, size :
page_size := uint(virtual_get_page_size())
to_commit := memory_align_formula( size, page_size )
alloc_error = core_virtual.commit( base_address, to_commit )
alloc_error = cast(AllocatorError) core_virtual.commit( base_address, to_commit )
if alloc_error != .None {
return alloc_error
}

View File

@@ -0,0 +1,26 @@
package grime
// TODO(Ed): Review this
import "base:runtime"
// TODO(Ed): Support address sanitizer
/*
Pool allocator backed by chained virtual arenas.
*/
Pool_FreeBlock :: struct { next: ^Pool_FreeBlock }
VPool :: struct {
arenas: ^Arena,
block_size: uint,
// alignment: uint,
free_list_head: ^Pool_FreeBlock,
}
pool_make :: proc() -> (pool: VPool, error: AllocatorError)
{
panic("not implemented")
// return
}

View File

@@ -0,0 +1,15 @@
package grime
VSlabSizeClass :: struct {
vmem_reserve: uint,
block_size: uint,
block_alignment: uint,
}
Slab_Max_Size_Classes :: 24
SlabPolicy :: FStack(VSlabSizeClass, Slab_Max_Size_Classes)
VSlab :: struct {
pools: FStack(VPool, Slab_Max_Size_Classes),
}

3
code2/gui_code/Readme.md Normal file
View File

@@ -0,0 +1,3 @@
# gui_code
This is the UI package used by sectr. It's meant to be optimal for composing complex code visualizations in soft real-time.

View File

@@ -1,3 +1,5 @@
# Host Module
The sole job of this module is to provide a bare launch pad and runtime module hot-reload support for the client module (sectr). To achieve this, the static memory of the client module is tracked by the host, which provides an API for the client to reload itself when a change is detected. The client is responsible for populating the static memory reference and doing anything else it needs via the host API that it cannot do on its own.
Uses the core's Arena allocator.

View File

@@ -1,34 +1,62 @@
package host
// TODO(Ed): Remove this
import "core:mem"
//region STATIC MEMORY
// All program defined process memory is here. (There will still be artifacts from the OS, CRT, and third-party packages)
host_memory: ProcessMemory
@(thread_local) thread_memory: ThreadMemory
Path_Logs :: "../logs"
when ODIN_OS == .Windows
{
Path_Sectr_Module :: "sectr.dll"
Path_Sectr_Live_Module :: "sectr_live.dll"
Path_Sectr_Debug_Symbols :: "sectr.pdb"
//endregion STATIC MEMORY
//region HOST RUNTIME
load_client_api :: proc(version_id: int) -> (loaded_module: Client_API) {
profile(#procedure)
using loaded_module
// Make sure we have a dll to work with
file_io_err: OS_Error; write_time, file_io_err = file_last_write_time_by_name("sectr.dll")
if file_io_err != OS_ERROR_NONE { panic_contextless( "Could not resolve the last write time for sectr") }
//TODO(Ed): Lets try to minimize this...
thread_sleep( Millisecond * 25 )
// Get the live dll loaded up
file_copy_sync( Path_Sectr_Module, Path_Sectr_Live_Module, allocator = context.temp_allocator )
did_load: bool; lib, did_load = os_lib_load( Path_Sectr_Live_Module )
if ! did_load do panic( "Failed to load the sectr module.")
startup = transmute( type_of( host_memory.client_api.startup)) os_lib_get_proc(lib, "startup")
shutdown = transmute( type_of( host_memory.client_api.shutdown)) os_lib_get_proc(lib, "sectr_shutdown")
tick_lane_startup = transmute( type_of( host_memory.client_api.tick_lane_startup)) os_lib_get_proc(lib, "tick_lane_startup")
job_worker_startup = transmute( type_of( host_memory.client_api.job_worker_startup)) os_lib_get_proc(lib, "job_worker_startup")
hot_reload = transmute( type_of( host_memory.client_api.hot_reload)) os_lib_get_proc(lib, "hot_reload")
tick_lane = transmute( type_of( host_memory.client_api.tick_lane)) os_lib_get_proc(lib, "tick_lane")
clean_frame = transmute( type_of( host_memory.client_api.clean_frame)) os_lib_get_proc(lib, "clean_frame")
jobsys_worker_tick = transmute( type_of( host_memory.client_api.jobsys_worker_tick)) os_lib_get_proc(lib, "jobsys_worker_tick")
if startup == nil do panic("Failed to load sectr.startup symbol" )
if shutdown == nil do panic("Failed to load sectr.shutdown symbol" )
if tick_lane_startup == nil do panic("Failed to load sectr.tick_lane_startup symbol" )
if job_worker_startup == nil do panic("Failed to load sectr.job_worker_startup symbol" )
if hot_reload == nil do panic("Failed to load sectr.hot_reload symbol" )
if tick_lane == nil do panic("Failed to load sectr.tick_lane symbol" )
if clean_frame == nil do panic("Failed to load sectr.clean_frame symbol" )
if jobsys_worker_tick == nil do panic("Failed to load sectr.jobsys_worker_tick symbol")
lib_version = version_id
return
}
// Only static memory host has.
host_memory: HostMemory
@(thread_local)
thread_memory: ThreadMemory
master_prepper_proc :: proc(thread: ^SysThread) {}
main :: proc()
{
// TODO(Ed): Change this
host_scratch: mem.Arena; mem.arena_init(& host_scratch, host_memory.host_scratch[:])
context.allocator = mem.arena_allocator(& host_scratch)
context.temp_allocator = context.allocator
thread_memory.index = .Master_Prepper
thread_id := thread_current_id()
// Setup host arenas
// TODO(Ed): Preferably I want to eliminate usage of this. We should be able to do almost everything here with fixed allocations..
arena_init(& host_memory.host_persist, host_memory.host_persist_buf[:])
arena_init(& host_memory.host_scratch, host_memory.host_scratch_buf[:])
context.allocator = arena_allocator(& host_memory.host_persist)
context.temp_allocator = arena_allocator(& host_memory.host_scratch)
// Setup the "Master Prepper" thread
{
thread_memory.id = .Master_Prepper
thread_id := thread_current_id()
using thread_memory
host_memory.threads[WorkerID.Master_Prepper] = new(SysThread)
system_ctx = host_memory.threads[WorkerID.Master_Prepper]
system_ctx.creation_allocator = {}
system_ctx.procedure = master_prepper_proc
when ODIN_OS == .Windows {
@@ -37,37 +65,239 @@ main :: proc()
system_ctx.id = cast(int) system_ctx.win32_thread_id
}
}
write_time, result := file_last_write_time_by_name("sectr.dll")
if result != OS_ERROR_NONE {
panic_contextless( "Could not resolve the last write time for sectr")
}
thread_sleep( Millisecond * 100 )
live_file := Path_Sectr_Live_Module
file_copy_sync( Path_Sectr_Module, live_file, allocator = context.temp_allocator )
when SHOULD_SETUP_PROFILERS
{
lib, load_result := os_lib_load( live_file )
if ! load_result {
panic( "Failed to load the sectr module." )
}
startup := cast( type_of( host_memory.client_api.startup )) os_lib_get_proc(lib, "startup")
hot_reload := cast( type_of( host_memory.client_api.hot_reload)) os_lib_get_proc(lib, "hot_reload")
if startup == nil do panic("Failed to load sectr.startup symbol" )
if hot_reload == nil do panic("Failed to load sectr.reload symbol" )
host_memory.client_api.lib = lib
host_memory.client_api.startup = startup
host_memory.client_api.hot_reload = hot_reload
// Setup main profiler
host_memory.spall_context = spall_context_create(Path_Sectr_Spall_Record)
grime_set_profiler_module_context(& host_memory.spall_context)
thread_memory.spall_buffer = spall_buffer_create(thread_memory.spall_buffer_backing[:], cast(u32) thread_memory.system_ctx.id)
grime_set_profiler_thread_buffer(& thread_memory.spall_buffer)
}
host_memory.host_api.sync_client_module = sync_client_api
// Setup the logger
path_logger_finalized: string
{
profile("Setup the logger")
// Generating the logger's name, it will be used when the app is shutting down.
{
startup_time := time_now()
year, month, day := time_date( startup_time)
hour, min, sec := time_clock_from_time( startup_time)
if ! os_is_directory( Path_Logs ) { os_make_directory( Path_Logs ) }
timestamp := str_pfmt_tmp("%04d-%02d-%02d_%02d-%02d-%02d", year, month, day, hour, min, sec)
host_memory.path_logger_finalized = str_pfmt("%s/sectr_%v.log", Path_Logs, timestamp)
}
logger_init( & host_memory.host_logger, "Sectr Host", str_pfmt_tmp("%s/sectr.log", Path_Logs))
context.logger = to_odin_logger( & host_memory.host_logger ); {
// Log System Context
builder := strbuilder_make_len(16 * Kilo, context.temp_allocator)
str_pfmt_builder( & builder, "Core Count: %v, ", os_core_count() )
str_pfmt_builder( & builder, "Page Size: %v", os_page_size() )
log_print( to_str(builder) )
}
free_all(context.temp_allocator)
}
context.logger = to_odin_logger( & host_memory.host_logger )
/*Load the Environment API for the first time*/{
host_memory.client_api = load_client_api( 1 )
verify( host_memory.client_api.lib_version != 0, "Failed to initially load the sectr module" )
}
// Client API Startup
host_memory.client_api.startup(& host_memory, & thread_memory)
{
profile("thread_wide_startup")
assert(thread_memory.id == .Master_Prepper)
{
profile("Tick Lanes")
host_memory.tick_running = true
host_memory.tick_lanes = THREAD_TICK_LANES
barrier_init(& host_memory.lane_sync, THREAD_TICK_LANES)
when THREAD_TICK_LANES > 1 {
for id in 1 ..= (THREAD_TICK_LANES - 1) {
lane_thread := thread_create_ex(host_tick_lane_entrypoint, .High, enum_to_string(cast(WorkerID)id))
lane_thread.user_index = id
host_memory.threads[lane_thread.user_index] = lane_thread
thread_start(lane_thread)
}
}
}
// Job System Setup
{
profile("Job System")
host_memory.job_system.running = true
host_memory.job_system.worker_num = THREAD_JOB_WORKERS
for & list in host_memory.job_system.job_lists {
list = {}
}
// +1: the master thread also participates in the job hot-reload barrier.
barrier_init(& host_memory.job_hot_reload_sync, THREAD_JOB_WORKERS + 1)
for id in THREAD_JOB_WORKER_ID_START ..< THREAD_JOB_WORKER_ID_END {
log_print_fmt("Spawned job worker: %v", cast(WorkerID) id)
worker_thread := thread_create_ex(host_job_worker_entrypoint, .Normal, enum_to_string(cast(WorkerID) id))
worker_thread.user_index = int(id)
host_memory.threads[worker_thread.user_index] = worker_thread
thread_start(worker_thread)
}
}
barrier_init(& host_memory.lane_job_sync, THREAD_TICK_LANES + THREAD_JOB_WORKERS)
}
free_all(context.temp_allocator)
host_tick_lane()
host_lane_shutdown()
profile_begin("Host Shutdown")
if thread_memory.id == .Master_Prepper {
thread_join_multiple(.. host_memory.threads[1:THREAD_TICK_LANES + THREAD_JOB_WORKERS])
}
host_memory.client_api.shutdown();
unload_client_api( & host_memory.client_api )
log_print("Successfully closed")
file_close( host_memory.host_logger.file )
file_rename( str_pfmt_tmp("%s/sectr.log", Path_Logs), host_memory.path_logger_finalized )
profile_end()
// End profiling
spall_buffer_destroy(& host_memory.spall_context, & thread_memory.spall_buffer)
spall_context_destroy( & host_memory.spall_context )
}
host_tick_lane_entrypoint :: proc(lane_thread: ^SysThread) {
thread_memory.system_ctx = lane_thread
thread_memory.id = cast(WorkerID) lane_thread.user_index
when SHOULD_SETUP_PROFILERS
{
thread_memory.spall_buffer = spall_buffer_create(thread_memory.spall_buffer_backing[:], cast(u32) thread_memory.system_ctx.id)
host_memory.client_api.tick_lane_startup(& thread_memory)
grime_set_profiler_thread_buffer(& thread_memory.spall_buffer)
}
host_tick_lane()
host_lane_shutdown()
}
host_tick_lane :: proc()
{
delta_ns: Duration
host_tick := time_tick_now()
for ; sync_load(& host_memory.tick_running, .Relaxed);
{
profile("Host Tick")
leader := barrier_wait(& host_memory.lane_sync)
running: b64 = host_memory.client_api.tick_lane(duration_seconds(delta_ns), delta_ns) == false
if thread_memory.id == .Master_Prepper {
sync_store(& host_memory.tick_running, running, .Release)
}
host_memory.client_api.clean_frame()
delta_ns = time_tick_lap_time( & host_tick )
host_tick = time_tick_now()
// Lanes are synced before the running check.
sync_client_api()
}
}
host_lane_shutdown :: proc()
{
profile(#procedure)
if thread_memory.id == .Master_Prepper {
jobs_enqueued := true
// if jobs_enqueued == false do debug_trap()
for ; jobs_enqueued; {
jobs_enqueued = false
jobs_enqueued |= host_memory.job_system.job_lists[.Normal].head != nil
jobs_enqueued |= host_memory.job_system.job_lists[.Low ].head != nil
jobs_enqueued |= host_memory.job_system.job_lists[.High ].head != nil
// if jobs_enqueued == false do debug_trap()
}
sync_store(& host_memory.job_system.running, false, .Release)
}
if thread_memory.id != .Master_Prepper {
spall_buffer_destroy( & host_memory.spall_context, & thread_memory.spall_buffer )
}
leader := barrier_wait(& host_memory.lane_job_sync)
}
host_job_worker_entrypoint :: proc(worker_thread: ^SysThread)
{
thread_memory.system_ctx = worker_thread
thread_memory.id = cast(WorkerID) worker_thread.user_index
when SHOULD_SETUP_PROFILERS
{
thread_memory.spall_buffer = spall_buffer_create(thread_memory.spall_buffer_backing[:], cast(u32) thread_memory.system_ctx.id)
host_memory.client_api.job_worker_startup(& thread_memory)
grime_set_profiler_thread_buffer(& thread_memory.spall_buffer)
}
jobs_enqueued := false
jobs_enqueued |= host_memory.job_system.job_lists[.Normal].head != nil
jobs_enqueued |= host_memory.job_system.job_lists[.Low ].head != nil
jobs_enqueued |= host_memory.job_system.job_lists[.High ].head != nil
delta_ns: Duration
host_tick := time_tick_now()
for ; jobs_enqueued || sync_load(& host_memory.job_system.running, .Relaxed);
{
// profile("Host Job Tick")
host_memory.client_api.jobsys_worker_tick(duration_seconds(delta_ns), delta_ns)
delta_ns = time_tick_lap_time( & host_tick )
host_tick = time_tick_now()
jobs_enqueued = false
jobs_enqueued |= host_memory.job_system.job_lists[.Normal].head != nil
jobs_enqueued |= host_memory.job_system.job_lists[.Low ].head != nil
jobs_enqueued |= host_memory.job_system.job_lists[.High ].head != nil
if jobs_enqueued == false && sync_load(& host_memory.client_api_hot_reloaded, .Acquire) {
// Signals to the main thread when all jobs have drained.
leader := barrier_wait(& host_memory.job_hot_reload_sync)
// Job threads wait here until the client module is back.
leader = barrier_wait(& host_memory.job_hot_reload_sync)
host_memory.client_api.hot_reload(& host_memory, & thread_memory)
}
}
spall_buffer_destroy( & host_memory.spall_context, & thread_memory.spall_buffer )
// We're exiting; wait for the tick lanes.
leader := barrier_wait(& host_memory.lane_job_sync)
}
@export
sync_client_api :: proc()
{
// Handles detection and reloading of the client API.
profile(#procedure)
// We don't want any lanes to be in client callstack during a hot-reload
leader := barrier_wait(& host_memory.lane_sync)
if thread_memory.id == .Master_Prepper
{
write_time, result := file_last_write_time_by_name( Path_Sectr_Module );
if result == OS_ERROR_NONE && host_memory.client_api.write_time != write_time
{
profile("Master_Prepper: Reloading client module")
sync_store(& host_memory.client_api_hot_reloaded, true, .Release)
// We need to wait for the job queue to drain.
leader = barrier_wait(& host_memory.job_hot_reload_sync)
{
version_id := host_memory.client_api.lib_version + 1
unload_client_api( & host_memory.client_api )
// Wait for the pdb to unlock (the linker may still be writing).
for ; file_is_locked( Path_Sectr_Debug_Symbols ) || file_is_locked( Path_Sectr_Live_Module ); {
thread_sleep( Millisecond * 25 )
}
host_memory.client_api = load_client_api( version_id )
verify( host_memory.client_api.lib_version != 0, "Failed to hot-reload the sectr module" )
}
leader = barrier_wait(& host_memory.job_hot_reload_sync)
}
}
leader = barrier_wait(& host_memory.lane_sync)
// Lanes are safe to continue.
if sync_load(& host_memory.client_api_hot_reloaded, .Acquire) {
host_memory.client_api.hot_reload(& host_memory, & thread_memory)
}
}
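
The reload sequence above hinges on two rendezvous on `job_hot_reload_sync`: job workers hit the first barrier once their queues drain, the master swaps the module, then everyone meets at the second barrier before resuming. A minimal standalone sketch of that double-barrier handshake (hypothetical names, not the host's actual types):

```odin
package example
import "core:fmt"
import "core:sync"
import "core:thread"

// Hypothetical illustration of the double-barrier handshake:
// barrier #1: the worker announces "all jobs drained", the reloader may swap the module.
// barrier #2: the worker waits until the reloader reports the module is back.
reload_sync: sync.Barrier

worker :: proc(t: ^thread.Thread) {
	sync.barrier_wait(&reload_sync) // 1) announce drained
	sync.barrier_wait(&reload_sync) // 2) wait for the reload to finish
	fmt.println("worker: resumed with reloaded module")
}

main :: proc() {
	sync.barrier_init(&reload_sync, 2) // 1 worker + the reloader
	t := thread.create(worker)
	thread.start(t)
	sync.barrier_wait(&reload_sync) // matches 1): worker is drained
	fmt.println("reloader: swapping module")
	sync.barrier_wait(&reload_sync) // matches 2): release the worker
	thread.join(t)
}
```

The same pattern scales to N workers by initializing the barrier with N + 1 participants, which is exactly what `barrier_init(& host_memory.job_hot_reload_sync, THREAD_JOB_WORKERS + 1)` does.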
unload_client_api :: proc( module : ^Client_API )
{
profile(#procedure)
os_lib_unload( module.lib )
file_remove( Path_Sectr_Live_Module )
module^ = {}
log_print("Unloaded client API")
}
//endregion HOST RUNTIME

package host
import "base:builtin"
// Odin_OS_Type :: type_of(ODIN_OS)
import "base:intrinsics"
// atomic_thread_fence :: intrinsics.atomic_thread_fence
// mem_zero :: intrinsics.mem_zero
// mem_zero_volatile :: intrinsics.mem_zero_volatile
// mem_copy :: intrinsics.mem_copy_non_overlapping
// mem_copy_overlapping :: intrinsics.mem_copy
import "base:runtime"
// Assertion_Failure_Proc :: runtime.Assertion_Failure_Proc
// Logger :: runtime.Logger
debug_trap :: runtime.debug_trap
import "core:dynlib"
os_lib_load :: dynlib.load_library
os_lib_unload :: dynlib.unload_library
os_lib_get_proc :: dynlib.symbol_address
import "core:fmt"
str_pfmt_builder :: fmt.sbprintf
str_pfmt_buffer :: fmt.bprintf
str_pfmt :: fmt.aprintf
str_pfmt_tmp :: fmt.tprintf
import "core:log"
LoggerLevel :: log.Level
import "core:mem"
Arena :: mem.Arena
arena_allocator :: mem.arena_allocator
arena_init :: mem.arena_init
import "core:os"
OS_ERROR_NONE :: os.ERROR_NONE
OS_Error :: os.Error
FileTime :: os.File_Time
file_close :: os.close
file_last_write_time_by_name :: os.last_write_time_by_name
file_remove :: os.remove
file_rename :: os.rename
file_status :: os.stat
os_is_directory :: os.is_dir
os_make_directory :: os.make_directory
os_core_count :: os.processor_core_count
os_page_size :: os.get_page_size
process_exit :: os.exit
import "core:prof/spall"
spall_context_create :: spall.context_create
spall_context_destroy :: spall.context_destroy
spall_buffer_create :: spall.buffer_create
spall_buffer_destroy :: spall.buffer_destroy
import "core:reflect"
enum_to_string :: reflect.enum_string
import "core:strings"
strbuilder_from_bytes :: strings.builder_from_bytes
strbuilder_make_len :: strings.builder_make_len
builder_to_str :: strings.to_string
import "core:sync"
Barrier :: sync.Barrier
barrier_init :: sync.barrier_init
barrier_wait :: sync.barrier_wait
thread_current_id :: sync.current_thread_id
// Cache coherent loads and stores (synchronizes relevant cache blocks/lines)
sync_load :: sync.atomic_load_explicit
sync_store :: sync.atomic_store_explicit
import "core:time"
Millisecond :: time.Millisecond
Second :: time.Second
Duration :: time.Duration
time_clock_from_time :: time.clock_from_time
duration_seconds :: time.duration_seconds
time_date :: time.date
time_now :: time.now
thread_sleep :: time.sleep
time_tick_now :: time.tick_now
time_tick_lap_time :: time.tick_lap_time
import "core:thread"
SysThread :: thread.Thread
thread_create :: thread.create
thread_create_ex :: thread.create_ex
thread_start :: thread.start
thread_destroy :: thread.destroy
thread_join_multiple :: thread.join_multiple
thread_terminate :: thread.terminate
import grime "codebase:grime"
DISABLE_GRIME_PROFILING :: grime.DISABLE_PROFILING
grime_set_profiler_module_context :: grime.set_profiler_module_context
grime_set_profiler_thread_buffer :: grime.set_profiler_thread_buffer
ensure :: grime.ensure
fatal :: grime.fatal
verify :: grime.verify
file_is_locked :: grime.file_is_locked
logger_init :: grime.logger_init
to_odin_logger :: grime.to_odin_logger
// Need a version with an un-wrapped allocator.
// file_copy_sync :: grime.file_copy_sync
file_copy_sync :: proc( path_src, path_dst: string, allocator := context.allocator ) -> b32 {
file_size : i64
{
path_info, result := file_status( path_src, allocator )
if result != OS_ERROR_NONE {
log_print_fmt("Could not get file info: %v", result, LoggerLevel.Error )
return false
}
file_size = path_info.size
}
src_content, result := os.read_entire_file_from_filename( path_src, allocator )
if ! result {
log_print_fmt( "Failed to read file to copy: %v", path_src, LoggerLevel.Error )
debug_trap()
return false
}
result = os.write_entire_file( path_dst, src_content, false )
if ! result {
log_print_fmt( "Failed to copy file: %v", path_dst, LoggerLevel.Error )
debug_trap()
return false
}
return true
}
import "codebase:sectr"
DISABLE_HOST_PROFILING :: sectr.DISABLE_HOST_PROFILING
DISABLE_CLIENT_PROFILING :: sectr.DISABLE_CLIENT_PROFILING
Path_Logs :: sectr.Path_Logs
Path_Sectr_Debug_Symbols :: sectr.Path_Debug_Symbols
Path_Sectr_Live_Module :: sectr.Path_Live_Module
Path_Sectr_Module :: sectr.Path_Module
Path_Sectr_Spall_Record :: sectr.Path_Spall_Record
MAX_THREADS :: sectr.MAX_THREADS
THREAD_TICK_LANES :: sectr.THREAD_TICK_LANES
THREAD_JOB_WORKERS :: sectr.THREAD_JOB_WORKERS
THREAD_JOB_WORKER_ID_START :: sectr.THREAD_JOB_WORKER_ID_START
THREAD_JOB_WORKER_ID_END :: sectr.THREAD_JOB_WORKER_ID_END
Client_API :: sectr.ModuleAPI
ProcessMemory :: sectr.ProcessMemory
ThreadMemory :: sectr.ThreadMemory
WorkerID :: sectr.WorkerID
// ensure :: #force_inline proc( condition : b32, msg : string, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Warning, location )
// debug_trap()
// }
// // TODO(Ed) : Setup exit codes!
// fatal :: #force_inline proc( msg : string, exit_code : int = -1, location := #caller_location ) {
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
// // TODO(Ed) : Setup exit codes!
// verify :: #force_inline proc( condition : b32, msg : string, exit_code : int = -1, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
log_print :: proc( msg : string, level := LoggerLevel.Info, loc := #caller_location ) {
context.allocator = arena_allocator(& host_memory.host_scratch)
context.temp_allocator = arena_allocator(& host_memory.host_scratch)
log.log( level, msg, location = loc )
}
log_print_fmt :: proc( fmt : string, args : ..any, level := LoggerLevel.Info, loc := #caller_location ) {
context.allocator = arena_allocator(& host_memory.host_scratch)
context.temp_allocator = arena_allocator(& host_memory.host_scratch)
log.logf( level, fmt, ..args, location = loc )
}
SHOULD_SETUP_PROFILERS :: \
DISABLE_GRIME_PROFILING == false ||
DISABLE_CLIENT_PROFILING == false ||
DISABLE_HOST_PROFILING == false
@(deferred_none = profile_end, disabled = DISABLE_HOST_PROFILING)
profile :: #force_inline proc "contextless" ( name : string, loc := #caller_location ) {
spall._buffer_begin( & host_memory.spall_context, & thread_memory.spall_buffer, name, "", loc )
}
@(disabled = DISABLE_HOST_PROFILING)
profile_begin :: #force_inline proc "contextless" ( name : string, loc := #caller_location ) {
spall._buffer_begin( & host_memory.spall_context, & thread_memory.spall_buffer, name, "", loc )
}
@(disabled = DISABLE_HOST_PROFILING)
profile_end :: #force_inline proc "contextless" () {
spall._buffer_end( & host_memory.spall_context, & thread_memory.spall_buffer)
}
Kilo :: 1024
Mega :: Kilo * 1024
Giga :: Mega * 1024
Tera :: Giga * 1024
to_str :: proc {
builder_to_str,
}

# Sectr Package
This is the monolithic package representing the prototype itself. Relative to the host package, this defines the client module API, process memory, and thread memory.
Many definitions considered independent of the prototype have been lifted into the grime package, vefontcache, or (in the future) other packages within this codebase collection.
All allocators and containers within Sectr are derived from Grime.
The memory heuristics for sectr are categorized, for now, into:
* Persistent Static: Never released for process lifetime.
* Persistent Conservative: Can be wiped
* Frame
* File Mappings
* Codebase DB

package sectr
Path_Assets :: "../assets/"
Path_Shaders :: "../shaders/"
Path_Input_Replay :: "input.sectr_replay"
Path_Logs :: "../logs"
when ODIN_OS == .Windows
{
Path_Module :: "sectr.dll"
Path_Live_Module :: "sectr_live.dll"
Path_Debug_Symbols :: "sectr.pdb"
Path_Spall_Record :: "sectr.spall"
}
DISABLE_CLIENT_PROFILING :: false
DISABLE_HOST_PROFILING :: false
// Hard constraint for Windows
MAX_THREADS :: 64
// TODO(Ed): We can technically hot-reload this (spin up or down lanes on reloads)
THREAD_TICK_LANES :: 2 // Must be at least one (the main thread counts as a lane).
THREAD_JOB_WORKERS :: 2 // Must be at least one for latent IO operations.
/*
Job workers are spawned after the tick lanes.
Even if the user adjusts them at runtime in the future,
we'd have all threads drain and respawn them from scratch.
*/
THREAD_JOB_WORKER_ID_START :: THREAD_TICK_LANES
THREAD_JOB_WORKER_ID_END :: (THREAD_TICK_LANES + THREAD_JOB_WORKERS)
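
With the defaults above, thread IDs partition as: tick lanes take 0 ..< THREAD_TICK_LANES (ID 0 is the main thread), and job workers take the half-open range [THREAD_JOB_WORKER_ID_START, THREAD_JOB_WORKER_ID_END). A tiny standalone check of that layout:

```odin
package example
import "core:fmt"

THREAD_TICK_LANES  :: 2
THREAD_JOB_WORKERS :: 2
THREAD_JOB_WORKER_ID_START :: THREAD_TICK_LANES
THREAD_JOB_WORKER_ID_END   :: THREAD_TICK_LANES + THREAD_JOB_WORKERS

main :: proc() {
	// IDs 0 ..< 2 are tick lanes (0 = Master_Prepper); IDs 2 ..< 4 are job workers.
	for id in THREAD_JOB_WORKER_ID_START ..< THREAD_JOB_WORKER_ID_END {
		fmt.printf("job worker id: %v\n", id) // 2, then 3
	}
}
```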

package sectr
import "base:runtime"
import c "core:c/libc"
import "core:dynlib"
// Sokol should only be used here and in the client_api_sokol_callbacks.odin
import sokol_app "thirdparty:sokol/app"
import sokol_gfx "thirdparty:sokol/gfx"
import sokol_glue "thirdparty:sokol/glue"
import sokol_gp "thirdparty:sokol/gp"
/*
This defines the client interface that the host process calls into.
*/
ModuleAPI :: struct {
lib: DynLibrary,
write_time: FileTime,
lib_version : int,
startup: type_of( startup),
shutdown: type_of( sectr_shutdown),
tick_lane_startup: type_of( tick_lane_startup),
job_worker_startup: type_of( job_worker_startup),
hot_reload: type_of( hot_reload),
tick_lane: type_of( tick_lane),
clean_frame: type_of( clean_frame),
jobsys_worker_tick: type_of( jobsys_worker_tick)
}
StartupContext :: struct {}
/*
Called by host.main when it completes its setup.
The goal of startup is to first prepare persistent state,
then prepare for multi-threaded "laned" tick: thread_wide_startup.
*/
@export
startup :: proc(host_mem: ^ProcessMemory, thread_mem: ^ThreadMemory)
{
dummy : int = 0
dummy += 1
// (Ignore RAD Debugger's values being null)
memory = host_mem
thread = thread_mem
// grime_set_profiler_module_context(& memory.spall_context)
// grime_set_profiler_thread_buffer(& thread.spall_buffer)
profile(#procedure)
thread_wide_startup()
startup_tick := tick_now()
logger_init(& memory.client_memory.logger, "Sectr", memory.host_logger.file_path, memory.host_logger.file)
context.logger = to_odin_logger(& memory.client_memory.logger)
using memory.client_memory
// Configuration Load
// TODO(Ed): Make this actually load from an ini
{
using config
resolution_width = 1000
resolution_height = 600
refresh_rate = 0
cam_min_zoom = 0.001
cam_max_zoom = 5.0
cam_zoom_mode = .Smooth
cam_zoom_smooth_snappiness = 4.0
cam_zoom_sensitivity_smooth = 0.5
cam_zoom_sensitivity_digital = 0.25
cam_zoom_scroll_delta_scale = 0.25
engine_refresh_hz = 240
timing_fps_moving_avg_alpha = 0.9
ui_resize_border_width = 5
// color_theme = App_Thm_Dusk
text_snap_glyph_shape_position = false
text_snap_glyph_render_height = false
text_size_screen_scalar = 1.4
text_size_canvas_scalar = 1.4
text_alpha_sharpen = 0.1
}
Desired_OS_Scheduler_MS :: 1
sleep_is_granular = set__scheduler_granularity( Desired_OS_Scheduler_MS )
// TODO(Ed): String Cache (Not backed by slab!)
// TODO(Ed): Setup input system
// TODO(Ed): Setup sokol_app
// TODO(Ed): Setup sokol_gfx
// TODO(Ed): Setup sokol_gp
// TODO(Ed): Use job system to load fonts!!!
// TODO(Ed): Setup screen ui state
// TODO(Ed): Setup proper workspace scaffold
startup_ms := duration_ms( tick_lap_time( & startup_tick))
log_print_fmt("Startup time: %v ms", startup_ms)
}
// NOTE(Ed): For some reason odin's symbols conflict with native foreign symbols...
// Called in host.main after all tick lane or job worker threads have joined.
@export
sectr_shutdown :: proc()
{
context.logger = to_odin_logger(& memory.client_memory.logger)
// TODO(Ed): Shut down font system
// TODO(Ed): Shutdown sokol gp, gfx, and app.
log_print("Client module shutdown complete")
}
/*
Called by host.sync_client_api when the client module has been reloaded.
Threads will eventually return to their tick_lane upon completion.
*/
@export
hot_reload :: proc(host_mem: ^ProcessMemory, thread_mem: ^ThreadMemory)
{
// Critical reference synchronization
{
thread = thread_mem
if thread.id == .Master_Prepper {
sync_store(& memory, host_mem, .Release)
// grime_set_profiler_module_context(& memory.spall_context)
}
else {
// NOTE(Ed): This is probably not necessary; they're just loops for my sanity.
for ; memory == nil; { sync_load(& memory, .Acquire) }
for ; thread == nil; { thread = thread_mem }
}
// grime_set_profiler_thread_buffer(& thread.spall_buffer)
}
profile(#procedure)
// Do hot-reload stuff...
{
context.logger = to_odin_logger(& memory.client_memory.logger)
// TODO(Ed): Setup context allocators
// TODO(Ed): Patch Sokol contexts
// We hopefully don't have to patch third-party allocators anymore per-hot-reload.
{
}
// TODO(Ed): Reload the font system
log_print("Module reloaded")
}
// Critical reference synchronization
{
leader := barrier_wait(& memory.lane_job_sync)
if thread.id == .Master_Prepper {
sync_store(& memory.client_api_hot_reloaded, false, .Release)
}
else {
// NOTE(Ed): This is probably not necessary; they're just loops for my sanity.
for ; memory.client_api_hot_reloaded == true; { sync_load(& memory.client_api_hot_reloaded, .Acquire) }
}
leader = barrier_wait(& memory.lane_job_sync)
}
}
/*
Called by host_tick_lane_startup
Used for lane specific startup operations
*/
@export
tick_lane_startup :: proc(thread_mem: ^ThreadMemory)
{
if thread_mem.id != .Master_Prepper {
thread = thread_mem
// grime_set_profiler_thread_buffer(& thread.spall_buffer)
}
profile(#procedure)
}
@export
job_worker_startup :: proc(thread_mem: ^ThreadMemory)
{
if thread_mem.id != .Master_Prepper {
thread = thread_mem
// grime_set_profiler_thread_buffer(& thread.spall_buffer)
}
profile(#procedure)
}
/*
Host handles the loop.
(We need threads to be outside of client callstack in the event of a hot-reload)
*/
@export
tick_lane :: proc(host_delta_time_ms: f64, host_delta_ns: Duration) -> (should_close: bool = false)
{
profile(#procedure)
profile_begin("sokol_app: pre_client_tick")
// should_close |= cast(b64) sokol_app.pre_client_frame() // TODO(Ed): SOKOL!
profile_end()
profile_begin("Client Tick")
{
should_close = tick_lane_work_frame(host_delta_time_ms)
}
client_tick := tick_now()
profile_end()
profile_begin("sokol_app: post_client_tick")
// sokol_app.post_client_frame() // TODO(Ed): SOKOL!
profile_end()
tick_lane_frametime(& client_tick, host_delta_time_ms, host_delta_ns)
return sync_load(& should_close, .Acquire)
}
// Note(Ed): Necessary for sokol_app_frame_callback
tick_lane_work_frame :: proc(host_delta_time_ms: f64) -> (should_close: bool)
{
profile("Work frame")
context.logger = to_odin_logger( & memory.client_memory.logger )
// TODO(Ed): Setup frame allocator
if thread.id == .Master_Prepper
{
// config := & memory.client_memory.config
// debug := & memory.client_memory.debug
// debug.draw_ui_box_bounds_points = false
// debug.draw_ui_padding_bounds = false
// debug.draw_ui_content_bounds = false
// config.engine_refresh_hz = 165
// config.color_theme = App_Thm_Light
// config.color_theme = App_Thm_Dusk
// config.color_theme = App_Thm_Dark
// sokol_width := sokol_app.widthf()
// sokol_height := sokol_app.heightf()
// window := & get_state().app_window
// if int(window.extent.x) != int(sokol_width) || int(window.extent.y) != int(sokol_height) {
// window.resized = true
// window.extent.x = sokol_width * 0.5
// window.extent.y = sokol_height * 0.5
// log("sokol_app: Event-based frame callback triggered (detected a resize)")
// }
}
// Test dispatching 64 jobs during hot_reload loop (when the above store is uncommented)
if true
{
if thread.id == .Master_Prepper {
profile("dispatching")
for job_id := 1; job_id < JOB_TEST_NUM; job_id += 1 {
memory.job_info_reload[job_id].id = job_id
memory.job_reload[job_id] = make_job_raw(& memory.job_group_reload, & memory.job_info_reload[job_id], test_job, {}, "Job Test (Hot-Reload)")
job_dispatch_single(& memory.job_reload[job_id], .Normal)
}
}
should_close = true
}
// should_close |= update( host_delta_time_ms )
// render()
return
}
@export
jobsys_worker_tick :: proc(host_delta_time_ms: f64, host_delta_ns: Duration)
{
// profile("Worker Tick")
context.logger = to_odin_logger(& memory.client_memory.logger)
ORDERED_PRIORITIES :: [len(JobPriority)]JobPriority{.High, .Normal, .Low}
block: for priority in ORDERED_PRIORITIES
{
if memory.job_system.job_lists[priority].head == nil do continue
if sync_mutex_try_lock(& memory.job_system.job_lists[priority].mutex)
{
profile("Executing Job")
if job := memory.job_system.job_lists[priority].head; job != nil
{
if thread.id in job.ignored {
sync_mutex_unlock(& memory.job_system.job_lists[priority].mutex)
continue
}
memory.job_system.job_lists[priority].head = job.next
sync_mutex_unlock(& memory.job_system.job_lists[priority].mutex)
assert(job.group != nil)
assert(job.cb != nil)
job.cb(job.data)
sync_sub(& job.group.counter, 1, .Seq_Cst)
break block
}
sync_mutex_unlock(& memory.job_system.job_lists[priority].mutex)
}
}
// Updating worker timing
{
// TODO(Ed): Setup timing
}
}
TestJobInfo :: struct {
id: int,
}
test_job :: proc(data: rawptr)
{
profile(#procedure)
info := cast(^TestJobInfo) data
log_print_fmt("Test job succeeded: %v", info.id)
}
Frametime_High_Perf_Threshold_MS :: 1000.0 / 240.0 // ~4.167 ms
// TODO(Ed): Lift this to be usable by both tick lanes and job worker threads.
tick_lane_frametime :: proc(client_tick: ^Tick, host_delta_time_ms: f64, host_delta_ns: Duration, can_sleep := true)
{
profile(#procedure)
config := app_config()
if thread.id == .Master_Prepper
{
frametime := & memory.client_memory.frametime
frametime.target_ms = 1000.0 / f64(config.engine_refresh_hz)
sub_ms_granularity_required := frametime.target_ms <= Frametime_High_Perf_Threshold_MS
frametime.delta_ns = tick_lap_time( client_tick )
frametime.delta_ms = duration_ms( frametime.delta_ns )
frametime.delta_seconds = duration_seconds( host_delta_ns )
frametime.elapsed_ms = frametime.delta_ms + host_delta_time_ms
if frametime.elapsed_ms < frametime.target_ms
{
sleep_ms := frametime.target_ms - frametime.elapsed_ms
pre_sleep_tick := tick_now()
if can_sleep && sleep_ms > 0 {
// thread_sleep( cast(Duration) sleep_ms * MS_To_NS )
// thread__highres_wait( sleep_ms )
}
sleep_delta_ns := tick_lap_time( & pre_sleep_tick)
sleep_delta_ms := duration_ms( sleep_delta_ns )
if sleep_delta_ms < sleep_ms {
// log( str_fmt_tmp("frametime sleep was off by: %v ms", sleep_delta_ms - sleep_ms ))
}
frametime.elapsed_ms += sleep_delta_ms
for ; frametime.elapsed_ms < frametime.target_ms; {
sleep_delta_ns = tick_lap_time( & pre_sleep_tick)
sleep_delta_ms = duration_ms( sleep_delta_ns )
frametime.elapsed_ms += sleep_delta_ms
}
}
config.timing_fps_moving_avg_alpha = 0.99
frametime.avg_ms = mov_avg_exp( f64(config.timing_fps_moving_avg_alpha), frametime.elapsed_ms, frametime.avg_ms )
frametime.fps_avg = 1 / (frametime.avg_ms * MS_To_S)
if frametime.elapsed_ms > 60.0 {
log_print_fmt("Big tick! %v ms", frametime.elapsed_ms, LoggerLevel.Warning)
}
frametime.current_frame += 1
}
else
{
// Non-main thread tick lane timing (since they are in lock-step this should be minimal delta)
}
}
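
`mov_avg_exp` isn't shown in this diff; from the call site (alpha first, then the new sample, then the running average) it is presumably the standard exponential moving average. A hedged standalone sketch of that assumed form:

```odin
package example
import "core:fmt"

// Assumed shape of mov_avg_exp: alpha close to 1 weights history heavily,
// so spikes (a "big tick") bleed into the average slowly.
mov_avg_exp :: proc(alpha, new_value, avg: f64) -> f64 {
	return alpha * avg + (1 - alpha) * new_value
}

main :: proc() {
	avg := 16.6 // ms, seeded near the target frametime
	for sample in ([]f64{16.9, 16.2, 33.0}) {
		avg = mov_avg_exp(0.9, sample, avg)
	}
	fmt.printf("smoothed frametime: %.2f ms\n", avg)
}
```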
@export
clean_frame :: proc()
{
profile(#procedure)
context.logger = to_odin_logger(& memory.client_memory.logger)
if thread.id == .Master_Prepper
{
// mem_reset( frame_allocator() )
}
return
}

package sectr
import sokol_app "thirdparty:sokol/app"
//region Sokol App
sokol_app_init_callback :: proc "c" () {
context = memory.client_memory.sokol_context
log_print("sokol_app: Confirmed initialization")
}
// This is being filled in but we're directly controlling the lifetime of sokol_app's execution.
// So this will only get called during window pan or resize events (on Win32 at least)
sokol_app_frame_callback :: proc "c" ()
{
profile(#procedure)
context = memory.client_memory.sokol_context
should_close: bool
sokol_width := sokol_app.widthf()
sokol_height := sokol_app.heightf()
window := & memory.client_memory.app_window
// if int(window.extent.x) != int(sokol_width) || int(window.extent.y) != int(sokol_height) {
window.resized = true
window.extent.x = cast(f32) i32(sokol_width * 0.5)
window.extent.y = cast(f32) i32(sokol_height * 0.5)
// log("sokol_app: Event-based frame callback triggered (detected a resize)")
// }
// sokol_app is the only good reference for a frame-time at this point.
sokol_delta_ms := sokol_app.frame_delta()
sokol_delta_ns := transmute(Duration) sokol_delta_ms * MS_To_NS
profile_begin("Client Tick")
client_tick := tick_now()
should_close |= tick_lane_work_frame( sokol_delta_ms )
profile_end()
tick_lane_frametime( & client_tick, sokol_delta_ms, sokol_delta_ns, can_sleep = false )
window.resized = false
}
sokol_app_cleanup_callback :: proc "c" () {
context = memory.client_memory.sokol_context
log_print("sokol_app: Confirmed cleanup")
}
sokol_app_alloc :: proc "c" ( size : uint, user_data : rawptr ) -> rawptr {
context = memory.client_memory.sokol_context
// block, error := mem_alloc( int(size), allocator = persistent_slab_allocator() )
// ensure(error == AllocatorError.None, "sokol_app allocation failed")
// return block
// TODO(Ed): Implement
return nil
}
sokol_app_free :: proc "c" ( data : rawptr, user_data : rawptr ) {
context = memory.client_memory.sokol_context
// mem_free(data, allocator = persistent_slab_allocator() )
// TODO(Ed): Implement
}
sokol_app_log_callback :: proc "c" (
tag: cstring,
log_level: u32,
log_item_id: u32,
message_or_null: cstring,
line_nr: u32,
filename_or_null: cstring,
user_data: rawptr)
{
context = memory.client_memory.sokol_context
odin_level: LoggerLevel
switch log_level {
case 0: odin_level = .Fatal
case 1: odin_level = .Error
case 2: odin_level = .Warning
case 3: odin_level = .Info
}
clone_backing: [16 * Kilo]byte
cloned_msg: string = "";
if message_or_null != nil {
cloned_msg = cstr_to_str_capped(message_or_null, clone_backing[:])
}
cloned_fname: string = ""
if filename_or_null != nil {
cloned_fname = cstr_to_str_capped(filename_or_null, clone_backing[len(cloned_msg):])
}
cloned_tag := cstr_to_str_capped(tag, clone_backing[len(cloned_msg) + len(cloned_fname):])
log_print_fmt( "%-80s %s::%v", cloned_msg, cloned_tag, line_nr, level = odin_level )
}
// TODO(Ed): Does this need to be queued to a separate thread?
sokol_app_event_callback :: proc "c" (sokol_event: ^sokol_app.Event)
{
context = memory.client_memory.sokol_context
event: InputEvent
using event
_sokol_frame_id = sokol_event.frame_count
frame_id = get_frametime().current_frame
mouse.pos = { sokol_event.mouse_x, sokol_event.mouse_y }
mouse.delta = { sokol_event.mouse_dx, sokol_event.mouse_dy }
switch sokol_event.type
{
case .INVALID:
log_print_fmt("sokol_app - event: INVALID?")
log_print_fmt("%v", sokol_event)
case .KEY_DOWN:
if sokol_event.key_repeat do return
type = .Key_Pressed
key = to_key_from_sokol( sokol_event.key_code )
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
// logf("Key pressed(sokol): %v", key)
// logf("frame (sokol): %v", frame_id )
case .KEY_UP:
if sokol_event.key_repeat do return
type = .Key_Released
key = to_key_from_sokol( sokol_event.key_code )
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
// logf("Key released(sokol): %v", key)
// logf("frame (sokol): %v", frame_id )
case .CHAR:
if sokol_event.key_repeat do return
type = .Unicode
codepoint = transmute(rune) sokol_event.char_code
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_DOWN:
type = .Mouse_Pressed
mouse.btn = to_mouse_btn_from_sokol( sokol_event.mouse_button )
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_UP:
type = .Mouse_Released
mouse.btn = to_mouse_btn_from_sokol( sokol_event.mouse_button )
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_SCROLL:
type = .Mouse_Scroll
mouse.scroll = { sokol_event.scroll_x, sokol_event.scroll_y }
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_MOVE:
type = .Mouse_Move
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_ENTER:
type = .Mouse_Enter
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
case .MOUSE_LEAVE:
type = .Mouse_Leave
modifiers = to_modifiers_code_from_sokol( sokol_event.modifiers )
sokol_app.consume_event()
append_staged_input_events( event )
// TODO(Ed): Add support
case .TOUCHES_BEGAN:
case .TOUCHES_MOVED:
case .TOUCHES_ENDED:
case .TOUCHES_CANCELLED:
case .RESIZED: sokol_app.consume_event()
case .ICONIFIED: sokol_app.consume_event()
case .RESTORED: sokol_app.consume_event()
case .FOCUSED: sokol_app.consume_event()
case .UNFOCUSED: sokol_app.consume_event()
case .SUSPENDED: sokol_app.consume_event()
case .RESUMED: sokol_app.consume_event()
case .QUIT_REQUESTED: sokol_app.consume_event()
case .CLIPBOARD_PASTED: sokol_app.consume_event()
case .FILES_DROPPED: sokol_app.consume_event()
case .DISPLAY_CHANGED:
log_print_fmt("sokol_app - event: Display changed")
log_print_fmt("refresh rate: %v", sokol_app.refresh_rate())
// monitor_refresh_hz := sokol_app.refresh_rate() // TODO(Ed): store this somewhere (unused local would not compile)
sokol_app.consume_event()
}
}
//endregion Sokol App
//region Sokol GFX
sokol_gfx_alloc :: proc "c" ( size : uint, user_data : rawptr ) -> rawptr {
context = memory.client_memory.sokol_context
// block, error := mem_alloc( int(size), allocator = persistent_slab_allocator() )
// ensure(error == AllocatorError.None, "sokol_gfx allocation failed")
// return block
// TODO(Ed): Implement
return nil
}
sokol_gfx_free :: proc "c" ( data : rawptr, user_data : rawptr ) {
context = memory.client_memory.sokol_context
// TODO(Ed): Implement
// free(data, allocator = persistent_slab_allocator() )
}
sokol_gfx_log_callback :: proc "c" (
tag: cstring,
log_level: u32,
log_item_id: u32,
message_or_null: cstring,
line_nr: u32,
filename_or_null: cstring,
user_data: rawptr)
{
context = memory.client_memory.sokol_context
odin_level : LoggerLevel
switch log_level {
case 0: odin_level = .Fatal
case 1: odin_level = .Error
case 2: odin_level = .Warning
case 3: odin_level = .Info
case: odin_level = .Info // default for unknown sokol log levels
}
clone_backing: [16 * Kilo]byte
cloned_msg : string = ""
if message_or_null != nil {
cloned_msg = cstr_to_str_capped(message_or_null, clone_backing[:])
}
cloned_fname : string = ""
if filename_or_null != nil {
cloned_fname = cstr_to_str_capped(filename_or_null, clone_backing[len(cloned_msg):])
}
cloned_tag := cstr_to_str_capped(tag, clone_backing[len(cloned_msg) + len(cloned_fname):])
log_print_fmt( "%-80s %s::%v", cloned_msg, cloned_tag, line_nr, level = odin_level )
}
//endregion Sokol GFX


@@ -1,22 +1,66 @@
package sectr
HostMemory :: struct {
host_scratch: [256 * Kilo]byte,
import "core:sync"
client_api: ModuleAPI,
client_memory: ^State,
host_api: Host_API,
/*
Everything defined for the host module within the client module
so that the client module has full awareness of relevant host definitions
Client interaction with host is very minimal,
host will only provide the base runtime for client's tick lanes and job system workers.
Host has all statically (data/bss) defined memory for the application; it will not touch
client_memory itself, however.
*/
ProcessMemory :: struct {
// Host
host_persist_buf: [32 * Mega]byte,
host_scratch_buf: [64 * Mega]byte,
host_persist: Odin_Arena, // Host Persistent (Non-Wipeable), for bad third-party static object allocation
host_scratch: Odin_Arena, // Host Temporary Wipable
host_api: Host_API, // Client -> Host Interface
// Textual Logging
host_logger: Logger,
path_logger_finalized: string,
// Profiling
spall_context: Spall_Context,
// TODO(Ed): Try out Superluminal's API!
// Multi-threading
threads: [MAX_THREADS](^SysThread), // All threads are tracked here.
job_system: JobSystemContext, // State tracking for job system.
tick_running: b64, // When disabled will lead to shutdown of the process.
tick_lanes: int, // Runtime tracker of live tick lane threads
lane_sync: sync.Barrier, // Used to sync tick lanes during wide junctions.
job_hot_reload_sync: sync.Barrier, // Used to sync jobs with main thread during hot-reload junction.
lane_job_sync: sync.Barrier, // Used to sync tick lanes and job workers during hot-reload.
// Client Module
client_api_hot_reloaded: b64, // Used to signal to threads when hot-reload paths should be taken.
client_api: ModuleAPI, // Host -> Client Interface
client_memory: State,
// Testing
job_group_reload: JobGroup,
job_info_reload: [JOB_TEST_NUM]TestJobInfo,
job_reload: [JOB_TEST_NUM]Job,
}
JOB_TEST_NUM :: 64
Host_API :: struct {
launch_thread: #type proc(),
request_virtual_memory: #type proc(),
request_virtual_mapped_io: #type proc(),
sync_client_module : #type proc(),
request_virtual_memory: #type proc(), // All dynamic allocations will utilize vmem interfaces
request_virtual_mapped_io: #type proc(), // TODO(Ed): Figure out usage constraints of this.
}
ThreadMemory :: struct {
using _: ThreadWorkerContext,
// Per-thread profiling
spall_buffer_backing: [SPALL_BUFFER_DEFAULT_SIZE]byte,
spall_buffer: Spall_Buffer,
client_memory: ThreadState,
}


@@ -1,8 +1,6 @@
package sectr
ThreadProc :: #type proc(data: rawptr)
IgnoredThreads :: bit_set[ 0 ..< 64 ]
JobIgnoredThreads :: bit_set[ WorkerID ]
JobProc :: #type proc(data: rawptr)
@@ -10,20 +8,20 @@ JobGroup :: struct {
counter: u64,
}
JobPriority :: enum {
Medium = 0,
JobPriority :: enum (u32) {
Normal = 0,
Low,
High,
}
Job :: struct {
next: ^Job,
cb: JobProc,
data: rawptr,
// scratch: ^CArena,
group: ^JobGroup,
ignored: IgnoredThreads,
dbg_lbl: string,
next: ^Job,
cb: JobProc,
data: rawptr,
// scratch: ^CArena,
group: ^JobGroup,
ignored: JobIgnoredThreads,
dbg_label: string,
}
JobList :: struct {
@@ -33,17 +31,17 @@ JobList :: struct {
JobSystemContext :: struct {
job_lists: [JobPriority]JobList,
worker_cb: ThreadProc,
worker_data: rawptr,
counter: int,
workers: [] ^ThreadWorkerContext,
running: b32,
// worker_cb: ThreadProc,
// worker_data: rawptr,
worker_num: int,
workers: [THREAD_JOB_WORKERS]^ThreadWorkerContext,
running: b32,
}
ThreadWorkerContext :: struct {
system_ctx: Thread,
index: WorkerID,
}
system_ctx: ^SysThread,
id: WorkerID,
}
WorkerID :: enum int {
Master_Prepper = 0,
@@ -88,7 +86,6 @@ WorkerID :: enum int {
Dereference_Doctorate,
Checkbox_Validator,
Credible_Threat,
Dead_Drop_Delegate,
Deadline_Denialist,
DMA_Desperado,
Dump_Curator,
@@ -98,7 +95,6 @@ WorkerID :: enum int {
Fitness_Unpacker,
Flop_Flipper,
Floating_Point_Propoganda,
Forgets_To_Check,
Global_Guardian,
Ghost_Protocols,
Halting_Solver,
@@ -111,14 +107,10 @@ WorkerID :: enum int {
Implementation_Detailer,
Interrupt_Ignorer,
Interrupt_Insurgent,
Jank_Jockey,
Jefe_De_Errores,
Kickoff_Holiday,
Kilobyte_Kingpin,
Latency_Lover,
Leeroy_Jenkins,
Legacy_Liaison,
Loop_Lobbyist,
Linter_Lamenter,
Low_Hanging_Fruit_Picker,
Malloc_Maverick,
@@ -144,18 +136,15 @@ WorkerID :: enum int {
Pipeline_Plumber,
Pointer_Pilgrim,
Production_Pusher,
Query_Gremlin,
Red_Tape_Renderer,
Resting_Receptionist,
Quantum_Quibbler,
Regex_Rancher,
Register_Riveter,
Register_Spill_Rancher,
Roadmap_Revisionist,
Runtime_Ruffian,
Sabbatical_Scheduler,
Scope_Creep_Shepherd,
Shift_Manager,
Segfault_Stretcher,
Siesta_Scheduler,
Singleton_Sinner,
@@ -164,9 +153,7 @@ WorkerID :: enum int {
Speculative_Skeptic,
Stack_Smuggler,
Techdebt_Treasurer,
Tenured_Trapper,
Triage_Technician,
Tunnel_Fisherman,
Undefined_Behavior_Brokerage,
Unreachable_Utopian,
Unicode_Usurper,
@@ -188,30 +175,59 @@ WorkerID :: enum int {
Zombo_Vistor,
}
// Hard constraint for Windows
JOB_SYSTEM_MAX_WORKER_THREADS :: 64
@(private) div_ceil :: #force_inline proc(a, b: int) -> int { return (a + b - 1) / b }
/*
Threads are set up upfront during the client API's startup.
*/
jobsys_startup :: proc(ctx: ^JobSystemContext, num_workers : int, worker_exec: ThreadProc, worker_data: rawptr) {
ctx^ = {
worker_cb = worker_exec,
worker_data = worker_data,
counter = 1,
}
// Determine number of physical cores
// Allocate worker contexts based on physical core count - 1 (the host-managed main thread is assumed to be index 0)
//
// num_hw_threads = min(JOB_SYSTEM_MAX_WORKER_THREADS, )
// jobsys_worker_make :
make_job_raw :: proc(group: ^JobGroup, data: rawptr, cb: JobProc, ignored_threads: JobIgnoredThreads = {}, dbg_label: string = "") -> Job {
assert(group != nil)
assert(cb != nil)
return {cb = cb, data = data, group = group, ignored = ignored_threads, dbg_label = dbg_label}
}
thread_worker_exec :: proc(_: rawptr) {
job_dispatch_single :: proc(job: ^Job, priority: JobPriority = .Normal) {
assert(job.group != nil)
sync_add(& job.group.counter, 1, .Seq_Cst)
sync_mutex_lock(& memory.job_system.job_lists[priority].mutex)
job.next = memory.job_system.job_lists[priority].head
memory.job_system.job_lists[priority].head = job
sync_mutex_unlock(& memory.job_system.job_lists[priority].mutex)
}
jobsys_shutdown :: proc(ctx: ^JobSystemContext) {
// Note: it's on you to clean up the memory after the jobs if you use a custom allocator.
// dispatch :: proc(priority: Priority = .Medium, jobs: ..Job, allocator := context.temp_allocator) -> []Job {
// _jobs := make([]Job, len(jobs), allocator)
// copy(_jobs, jobs)
// dispatch_jobs(priority, _jobs)
// return _jobs
// }
}
// Push jobs to the queue for the given priority.
// dispatch_jobs :: proc(priority: Priority, jobs: []Job) {
// for &job, i in jobs {
// assert(job.group != nil)
// intrinsics.atomic_add(&job.group.atomic_counter, 1)
// if i < len(jobs) - 1 {
// job._next = &jobs[i + 1]
// }
// }
// sync.atomic_mutex_lock(&_state.job_lists[priority].mutex)
// jobs[len(jobs) - 1]._next = _state.job_lists[priority].head
// _state.job_lists[priority].head = &jobs[0]
// sync.atomic_mutex_unlock(&_state.job_lists[priority].mutex)
// }
// Block the current thread until all jobs in the group are finished.
// Other queued jobs are executed while waiting.
// wait :: proc(group: ^Group) {
// for !group_is_finished(group) {
// try_execute_queued_job()
// }
// group^ = {}
// }
// Check if all jobs in the group are finished.
// @(require_results)
// group_is_finished :: #force_inline proc(group: ^Group) -> bool {
// return intrinsics.atomic_load(&group.atomic_counter) <= 0
// }
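The commented-out reference `wait` above could be adapted to the current `JobGroup` naming roughly as follows. This is an untested sketch: it assumes a `try_execute_queued_job` helper (not yet written) that pops and runs one job from the priority lists, and a `sync_load` wrapper in the same family as the `sync_add` used by `job_dispatch_single`.

```
// Sketch (untested): block until all jobs in the group finish, helping to
// drain the queues instead of spinning idle. `try_execute_queued_job` is a
// hypothetical helper matching the commented reference code above.
job_group_wait :: proc(group: ^JobGroup) {
	for sync_load(& group.counter, .Seq_Cst) > 0 {
		try_execute_queued_job()
	}
	group^ = {}
}
```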


@@ -0,0 +1,90 @@
package sectr
InputBindSig :: distinct u128
InputBind :: struct {
keys: [4]KeyCode,
mouse_btns: [4]MouseBtn,
scroll: [2]AnalogAxis,
modifiers: ModifierCodeFlags,
label: string,
}
InputBindStatus :: struct {
detected: b32,
consumed: b32,
frame_id: u64,
}
InputActionProc :: #type proc(user_ptr: rawptr)
InputAction :: struct {
id: int,
user_ptr: rawptr,
cb: InputActionProc,
always: b32,
}
InputContext :: struct {
binds: []InputBind,
status: []InputBindStatus,
onpush_action: []InputAction,
onpop_action: []InputAction,
signature: []InputBindSig,
}
inputbind_signature :: proc(binding: InputBind) -> InputBindSig {
// TODO(Ed): Figure out best hasher for this...
return cast(InputBindSig) 0
}
// Note(Ed): Bindings should be remade for a context when a user modifies any in configuration.
inputcontext_init :: proc(ctx: ^InputContext, binds: []InputBind, onpush: []InputAction = {}, onpop: []InputAction = {}) {
ctx.binds = binds
ctx.onpush_action = onpush
ctx.onpop_action = onpop
for bind, id in ctx.binds {
ctx.signature[id] = inputbind_signature(bind)
}
}
inputcontext_make :: #force_inline proc(binds: []InputBind, onpush: []InputAction = {}, onpop: []InputAction = {}) -> InputContext {
ctx: InputContext; inputcontext_init(& ctx, binds, onpush, onpop); return ctx
}
// Should be called by the user explicitly during frame cleanup.
inputcontext_clear_status :: #force_inline proc "contextless" (ctx: ^InputContext) {
zero(ctx.status)
}
inputbinding_status :: #force_inline proc(id: int) -> InputBindStatus {
return get_input_binds().status[id]
}
inputcontext_inherit :: proc(dst: ^InputContext, src: ^InputContext) {
for dst_sig, dst_id in dst.signature
{
for src_sig, src_id in src.signature
{
if dst_sig != src_sig {
continue
}
dst.status[dst_id] = src.status[src_id]
}
}
}
inputcontext_push :: proc(ctx: ^InputContext, dont_inherit_status: b32 = false) {
// push context stack
// clear binding status for context
// optionally inherit status
// detect status
// Dispatch push actions meeting conditions
}
inputcontext_pop :: proc(ctx: ^InputContext, dont_inherit_status: b32 = false) {
// Dispatch pop actions meeting conditions
// parent inherit consumed statuses
// pop context stack
}
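For reference, the steps listed in the push comments might sketch out like this. Everything here is hypothetical: `ctx_stack` does not exist in `State` yet, and `detect_binds` is a stand-in for the not-yet-written status detection pass.

```
// Hypothetical sketch of inputcontext_push's commented steps;
// ctx_stack and detect_binds do not exist yet in this codebase.
inputcontext_push_sketch :: proc(ctx: ^InputContext, dont_inherit_status: b32 = false) {
	parent := peek_back(& memory.client_memory.ctx_stack) // hypothetical context stack
	append(& memory.client_memory.ctx_stack, ctx)
	inputcontext_clear_status(ctx)
	if ! dont_inherit_status && parent != nil {
		inputcontext_inherit(ctx, parent^)
	}
	detect_binds(ctx) // hypothetical: fills ctx.status from this frame's events
	for action in ctx.onpush_action {
		if action.always || ctx.status[action.id].detected do action.cb(action.user_ptr)
	}
}
```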


@@ -0,0 +1,286 @@
package sectr
InputEventType :: enum u32 {
Key_Pressed,
Key_Released,
Mouse_Pressed,
Mouse_Released,
Mouse_Scroll,
Mouse_Move,
Mouse_Enter,
Mouse_Leave,
Unicode,
}
InputEvent :: struct
{
frame_id : u64,
type : InputEventType,
key : KeyCode,
modifiers : ModifierCodeFlags,
mouse : struct {
btn : MouseBtn,
pos : V2_F4,
delta : V2_F4,
scroll : V2_F4,
},
codepoint : rune,
// num_touches : u32,
// touches : Touchpoint,
_sokol_frame_id : u64,
}
// TODO(Ed): May just use input event exclusively in the future and have pointers for key and mouse event filters
// I'm on the fence about this as I don't want to force
InputKeyEvent :: struct {
frame_id : u64,
type : InputEventType,
key : KeyCode,
modifiers : ModifierCodeFlags,
}
InputMouseEvent :: struct {
frame_id : u64,
type : InputEventType,
btn : MouseBtn,
pos : V2_F4,
delta : V2_F4,
scroll : V2_F4,
modifiers : ModifierCodeFlags,
}
// Let's see if we need more than this...
InputEvents :: struct {
events : FRingBuffer(InputEvent, 64),
key_events : FRingBuffer(InputKeyEvent, 32),
mouse_events : FRingBuffer(InputMouseEvent, 32),
codes_pressed : Array(rune),
}
// Note(Ed): There is a staged_input_events : Array(InputEvent), in the state.odin's State struct
append_staged_input_events :: #force_inline proc(event: InputEvent) {
append( & memory.client_memory.staged_input_events, event )
}
pull_staged_input_events :: proc( input: ^InputState, using input_events: ^InputEvents, using staged_events : Array(InputEvent) )
{
staged_events_slice := array_to_slice(staged_events)
push( & input_events.events, staged_events_slice )
// using input_events
for event in staged_events_slice
{
switch event.type {
case .Key_Pressed:
push( & key_events, InputKeyEvent {
frame_id = event.frame_id,
type = event.type,
key = event.key,
modifiers = event.modifiers
})
// logf("Key pressed(event pushed): %v", event.key)
// logf("last key event frame: %v", peek_back(& key_events).frame_id)
// logf("last event frame: %v", peek_back(& events).frame_id)
case .Key_Released:
push( & key_events, InputKeyEvent {
frame_id = event.frame_id,
type = event.type,
key = event.key,
modifiers = event.modifiers
})
// logf("Key released(event pushed): %v", event.key)
// logf("last key event frame: %v", peek_back(& key_events).frame_id)
// logf("last event frame: %v", peek_back(& events).frame_id)
case .Unicode:
append( & codes_pressed, event.codepoint )
case .Mouse_Pressed:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
case .Mouse_Released:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
case .Mouse_Scroll:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
// logf("Detected scroll: %v", event.mouse.scroll)
case .Mouse_Move:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
case .Mouse_Enter:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
case .Mouse_Leave:
push( & mouse_events, InputMouseEvent {
frame_id = event.frame_id,
type = event.type,
btn = event.mouse.btn,
pos = event.mouse.pos,
delta = event.mouse.delta,
scroll = event.mouse.scroll,
modifiers = event.modifiers,
})
}
}
clear( staged_events )
}
poll_input_events :: proc( input, prev_input : ^InputState, input_events : InputEvents )
{
input.keyboard = {}
input.mouse = {}
// logf("m's value is: %v (prev)", prev_input.keyboard.keys[KeyCode.M] )
for prev_key, id in prev_input.keyboard.keys {
input.keyboard.keys[id].ended_down = prev_key.ended_down
}
for prev_btn, id in prev_input.mouse.btns {
input.mouse.btns[id].ended_down = prev_btn.ended_down
}
input.mouse.raw_pos = prev_input.mouse.raw_pos
input.mouse.pos = prev_input.mouse.pos
input_events := input_events
using input_events
@static prev_frame : u64 = 0
last_frame : u64 = 0
if events.num > 0 {
last_frame = peek_back( events).frame_id
}
// No new events, don't update
if last_frame == prev_frame do return
Iterate_Key_Events:
{
iter_obj := iterator( & key_events ); iter := & iter_obj
for event := next( iter ); event != nil; event = next( iter )
{
// logf("last_frame (iter): %v", last_frame)
// logf("frame (iter): %v", event.frame_id )
if last_frame > event.frame_id {
break
}
key := & input.keyboard.keys[event.key]
// prev_key := prev_input.keyboard.keys[event.key] // unused for now
// logf("key event: %v", event)
// first_transition := key.half_transitions == 0 // unused for now
#partial switch event.type {
case .Key_Pressed:
key.half_transitions += 1
key.ended_down = true
case .Key_Released:
key.half_transitions += 1
key.ended_down = false
}
}
}
Iterate_Mouse_Events:
{
iter_obj := iterator( & mouse_events ); iter := & iter_obj
for event := next( iter ); event != nil; event = next( iter )
{
if last_frame > event.frame_id {
break
}
process_digital_btn :: proc( btn : ^DigitalBtn, prev_btn : DigitalBtn, ended_down : b32 )
{
// first_transition := btn.half_transitions == 0 // unused for now
btn.half_transitions += 1
btn.ended_down = ended_down
}
// log_print_fmt("mouse event: %v", event)
#partial switch event.type {
case .Mouse_Pressed:
btn := & input.mouse.btns[event.btn]
prev_btn := prev_input.mouse.btns[event.btn]
process_digital_btn( btn, prev_btn, true )
case .Mouse_Released:
btn := & input.mouse.btns[event.btn]
prev_btn := prev_input.mouse.btns[event.btn]
process_digital_btn( btn, prev_btn, false )
case .Mouse_Scroll:
input.mouse.scroll += event.scroll
case .Mouse_Move:
case .Mouse_Enter:
case .Mouse_Leave:
// Handled below
}
input.mouse.raw_pos = event.pos
input.mouse.pos = render_to_screen_pos( event.pos, memory.client_memory.app_window.extent )
input.mouse.delta = event.delta * { 1, -1 }
}
}
prev_frame = last_frame
}
input_event_iter :: #force_inline proc () -> FRingBufferIterator(InputEvent) {
return iterator_ringbuf_fixed( & memory.client_memory.input_events.events )
}
input_key_event_iter :: #force_inline proc() -> FRingBufferIterator(InputKeyEvent) {
return iterator_ringbuf_fixed( & memory.client_memory.input_events.key_events )
}
input_mouse_event_iter :: #force_inline proc() -> FRingBufferIterator(InputMouseEvent) {
return iterator_ringbuf_fixed( & memory.client_memory.input_events.mouse_events )
}
input_codes_pressed_slice :: #force_inline proc() -> []rune {
return to_slice( memory.client_memory.input_events.codes_pressed )
}


@@ -0,0 +1,186 @@
// TODO(Ed): This can be moved to its own package if it gets large enough
package sectr
import "base:runtime"
AnalogAxis :: f32
AnalogStick :: struct {
X, Y : f32
}
DigitalBtn :: struct {
half_transitions : i32,
ended_down : b32,
}
btn_pressed :: #force_inline proc "contextless" (btn: DigitalBtn) -> b32 { return btn.ended_down && btn.half_transitions > 0 }
btn_released :: #force_inline proc "contextless" (btn: DigitalBtn) -> b32 { return btn.ended_down == false && btn.half_transitions > 0 }
MaxMouseBtns :: 16
MouseBtn :: enum u32 {
Left = 0x0,
Middle = 0x1,
Right = 0x2,
Side = 0x3,
Forward = 0x4,
Back = 0x5,
Extra = 0x6,
Invalid = 0x100,
count
}
KeyboardState :: struct #raw_union {
keys : [KeyCode.count] DigitalBtn,
using individual : struct {
null : DigitalBtn, // 0x00
ignored : DigitalBtn, // 0x01
// GFLW / Sokol
menu,
world_1, world_2 : DigitalBtn,
// 0x02 - 0x04
__0x05_0x07_Unassigned__ : [ 3 * size_of( DigitalBtn)] u8,
tab, backspace : DigitalBtn,
// 0x08 - 0x09
right, left, up, down : DigitalBtn,
// 0x0A - 0x0D
enter : DigitalBtn, // 0x0E
__0x0F_Unassigned__ : [ 1 * size_of( DigitalBtn)] u8,
caps_lock,
scroll_lock,
num_lock : DigitalBtn,
// 0x10 - 0x12
left_alt,
left_shift,
left_control,
right_alt,
right_shift,
right_control : DigitalBtn,
// 0x13 - 0x18
print_screen,
pause,
escape,
home,
end,
page_up,
page_down,
space : DigitalBtn,
// 0x19 - 0x20
exclamation,
quote_dbl,
hash,
dollar,
percent,
ampersand,
quote,
paren_open,
paren_close,
asterisk,
plus,
comma,
minus,
period,
slash : DigitalBtn,
// 0x21 - 0x2F
nrow_0, // 0x30
nrow_1, // 0x31
nrow_2, // 0x32
nrow_3, // 0x33
nrow_4, // 0x34
nrow_5, // 0x35
nrow_6, // 0x36
nrow_7, // 0x37
nrow_8, // 0x38
nrow_9, // 0x39
__0x3A_Unassigned__ : [ 1 * size_of(DigitalBtn)] u8,
semicolon,
less,
equals,
greater,
question,
at : DigitalBtn,
A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z : DigitalBtn,
bracket_open,
backslash,
bracket_close,
underscore,
backtick : DigitalBtn,
kpad_0,
kpad_1,
kpad_2,
kpad_3,
kpad_4,
kpad_5,
kpad_6,
kpad_7,
kpad_8,
kpad_9,
kpad_decimal,
kpad_equals,
kpad_plus,
kpad_minus,
kpad_multiply,
kpad_divide,
kpad_enter : DigitalBtn,
F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, F11, F12 : DigitalBtn,
insert, delete : DigitalBtn,
F13, F14, F15, F16, F17, F18, F19, F20, F21, F22, F23, F24, F25 : DigitalBtn,
}
}
ModifierCode :: enum u32 {
Shift,
Control,
Alt,
Left_Mouse,
Right_Mouse,
Middle_Mouse,
Left_Shift,
Right_Shift,
Left_Control,
Right_Control,
Left_Alt,
Right_Alt,
}
ModifierCodeFlags :: bit_set[ModifierCode; u32]
MouseState :: struct {
using _ : struct #raw_union {
btns : [16] DigitalBtn,
using individual : struct {
left, middle, right : DigitalBtn,
side, forward, back, extra : DigitalBtn,
}
},
raw_pos, pos, delta : V2_F4,
scroll : [2]AnalogAxis,
}
mouse_world_delta :: #force_inline proc "contextless" (mouse_delta: V2_F4, cam: ^Camera) -> V2_F4 {
return mouse_delta * ( 1 / cam.zoom )
}
InputState :: struct {
keyboard : KeyboardState,
mouse : MouseState,
}


@@ -0,0 +1,84 @@
package sectr
import "base:runtime"
import "core:os"
import "core:c/libc"
import sokol_app "thirdparty:sokol/app"
to_modifiers_code_from_sokol :: proc( sokol_modifiers : u32 ) -> ( modifiers : ModifierCodeFlags )
{
if sokol_modifiers & sokol_app.MODIFIER_SHIFT != 0 do modifiers |= { .Shift }
if sokol_modifiers & sokol_app.MODIFIER_CTRL != 0 do modifiers |= { .Control }
if sokol_modifiers & sokol_app.MODIFIER_ALT != 0 do modifiers |= { .Alt }
if sokol_modifiers & sokol_app.MODIFIER_LMB != 0 do modifiers |= { .Left_Mouse }
if sokol_modifiers & sokol_app.MODIFIER_RMB != 0 do modifiers |= { .Right_Mouse }
if sokol_modifiers & sokol_app.MODIFIER_MMB != 0 do modifiers |= { .Middle_Mouse }
if sokol_modifiers & sokol_app.MODIFIER_LSHIFT != 0 do modifiers |= { .Left_Shift }
if sokol_modifiers & sokol_app.MODIFIER_RSHIFT != 0 do modifiers |= { .Right_Shift }
if sokol_modifiers & sokol_app.MODIFIER_LCTRL != 0 do modifiers |= { .Left_Control }
if sokol_modifiers & sokol_app.MODIFIER_RCTRL != 0 do modifiers |= { .Right_Control }
if sokol_modifiers & sokol_app.MODIFIER_LALT != 0 do modifiers |= { .Left_Alt }
if sokol_modifiers & sokol_app.MODIFIER_RALT != 0 do modifiers |= { .Right_Alt }
return
}
to_key_from_sokol :: proc( sokol_key : sokol_app.Keycode ) -> ( key : KeyCode )
{
world_code_offset :: i32(sokol_app.Keycode.WORLD_1) - i32(KeyCode.world_1)
arrow_code_offset :: i32(sokol_app.Keycode.RIGHT) - i32(KeyCode.right)
func_row_code_offset :: i32(sokol_app.Keycode.F1) - i32(KeyCode.F1)
func_extra_code_offset :: i32(sokol_app.Keycode.F13) - i32(KeyCode.F13)
keypad_num_offset :: i32(sokol_app.Keycode.KP_0) - i32(KeyCode.kpad_0)
switch sokol_key {
case .INVALID ..= .GRAVE_ACCENT : key = transmute(KeyCode) sokol_key
case .WORLD_1, .WORLD_2 : key = transmute(KeyCode) (i32(sokol_key) - world_code_offset)
case .ESCAPE : key = .escape
case .ENTER : key = .enter
case .TAB : key = .tab
case .BACKSPACE : key = .backspace
case .INSERT : key = .insert
case .DELETE : key = .delete
case .RIGHT ..= .UP : key = transmute(KeyCode) (i32(sokol_key) - arrow_code_offset)
case .PAGE_UP : key = .page_up
case .PAGE_DOWN : key = .page_down
case .HOME : key = .home
case .END : key = .end
case .CAPS_LOCK : key = .caps_lock
case .SCROLL_LOCK : key = .scroll_lock
case .NUM_LOCK : key = .num_lock
case .PRINT_SCREEN : key = .print_screen
case .PAUSE : key = .pause
case .F1 ..= .F12 : key = transmute(KeyCode) (i32(sokol_key) - func_row_code_offset)
case .F13 ..= .F25 : key = transmute(KeyCode) (i32(sokol_key) - func_extra_code_offset)
case .KP_0 ..= .KP_9 : key = transmute(KeyCode) (i32(sokol_key) - keypad_num_offset)
case .KP_DECIMAL : key = .kpad_decimal
case .KP_DIVIDE : key = .kpad_divide
case .KP_MULTIPLY : key = .kpad_multiply
case .KP_SUBTRACT : key = .kpad_minus
case .KP_ADD : key = .kpad_plus
case .KP_ENTER : key = .kpad_enter
case .KP_EQUAL : key = .kpad_equals
case .LEFT_SHIFT : key = .left_shift
case .LEFT_CONTROL : key = .left_control
case .LEFT_ALT : key = .left_alt
case .LEFT_SUPER : key = .ignored
case .RIGHT_SHIFT : key = .right_shift
case .RIGHT_CONTROL : key = .right_control
case .RIGHT_ALT : key = .right_alt
case .RIGHT_SUPER : key = .ignored
case .MENU : key = .menu
}
return
}
to_mouse_btn_from_sokol :: proc( sokol_mouse : sokol_app.Mousebutton ) -> ( btn : MouseBtn )
{
switch sokol_mouse {
case .LEFT : btn = .Left
case .MIDDLE : btn = .Middle
case .RIGHT : btn = .Right
case .INVALID : btn = .Invalid
}
return
}


@@ -0,0 +1,239 @@
package sectr
// Based off of SDL2's Scancode; which is based off of:
// https://usb.org/sites/default/files/hut1_12.pdf
// I gutted values I would never use
QwertyCode :: enum u32 {
unknown = 0,
A = 4,
B = 5,
C = 6,
D = 7,
E = 8,
F = 9,
G = 10,
H = 11,
I = 12,
J = 13,
K = 14,
L = 15,
M = 16,
N = 17,
O = 18,
P = 19,
Q = 20,
R = 21,
S = 22,
T = 23,
U = 24,
V = 25,
W = 26,
X = 27,
Y = 28,
Z = 29,
nrow_1 = 30,
nrow_2 = 31,
nrow_3 = 32,
nrow_4 = 33,
nrow_5 = 34,
nrow_6 = 35,
nrow_7 = 36,
nrow_8 = 37,
nrow_9 = 38,
nrow_0 = 39,
enter = 40,
escape = 41,
backspace = 42,
tab = 43,
space = 44,
minus = 45,
equals = 46,
bracket_open = 47,
bracket_close = 48,
backslash = 49,
NONUSHASH = 50,
semicolon = 51,
apostrophe = 52,
grave = 53,
comma = 54,
period = 55,
slash = 56,
capslock = 57,
F1 = 58,
F2 = 59,
F3 = 60,
F4 = 61,
F5 = 62,
F6 = 63,
F7 = 64,
F8 = 65,
F9 = 66,
F10 = 67,
F11 = 68,
F12 = 69,
// print_screen = 70,
// scroll_lock = 71,
pause = 72,
insert = 73,
home = 74,
page_up = 75,
delete = 76,
end = 77,
page_down = 78,
right = 79,
left = 80,
down = 81,
up = 82,
numlock_clear = 83,
kpad_divide = 84,
kpad_multiply = 85,
kpad_minus = 86,
kpad_plus = 87,
kpad_enter = 88,
kpad_1 = 89,
kpad_2 = 90,
kpad_3 = 91,
kpad_4 = 92,
kpad_5 = 93,
kpad_6 = 94,
kpad_7 = 95,
kpad_8 = 96,
kpad_9 = 97,
kpad_0 = 98,
kpad_period = 99,
// NONUSBACKSLASH = 100,
// OS_Compose = 101,
// power = 102,
kpad_equals = 103,
// F13 = 104,
// F14 = 105,
// F15 = 106,
// F16 = 107,
// F17 = 108,
// F18 = 109,
// F19 = 110,
// F20 = 111,
// F21 = 112,
// F22 = 113,
// F23 = 114,
// F24 = 115,
// execute = 116,
// help = 117,
// menu = 118,
// select = 119,
// stop = 120,
// again = 121,
// undo = 122,
// cut = 123,
// copy = 124,
// paste = 125,
// find = 126,
// mute = 127,
// volume_up = 128,
// volume_down = 129,
/* LOCKINGCAPSLOCK = 130, */
/* LOCKINGNUMLOCK = 131, */
/* LOCKINGSCROLLLOCK = 132, */
// kpad_comma = 133,
// kpad_equals_AS400 = 134,
// international_1 = 135,
// international_2 = 136,
// international_3 = 137,
// international_4 = 138,
// international_5 = 139,
// international_6 = 140,
// international_7 = 141,
// international_8 = 142,
// international_9 = 143,
// lang_1 = 144,
// lang_2 = 145,
// lang_3 = 146,
// lang_4 = 147,
// lang_5 = 148,
// lang_6 = 149,
// lang_7 = 150,
// lang_8 = 151,
// lang_9 = 152,
// alt_erase = 153,
// sysreq = 154,
// cancel = 155,
// clear = 156,
// prior = 157,
// return_2 = 158,
// separator = 159,
// out = 160,
// OPER = 161,
// clear_again = 162,
// CRSEL = 163,
// EXSEL = 164,
// KP_00 = 176,
// KP_000 = 177,
// THOUSANDSSEPARATOR = 178,
// DECIMALSEPARATOR = 179,
// CURRENCYUNIT = 180,
// CURRENCYSUBUNIT = 181,
// KP_LEFTPAREN = 182,
// KP_RIGHTPAREN = 183,
// KP_LEFTBRACE = 184,
// KP_RIGHTBRACE = 185,
// KP_TAB = 186,
// KP_BACKSPACE = 187,
// KP_A = 188,
// KP_B = 189,
// KP_C = 190,
// KP_D = 191,
// KP_E = 192,
// KP_F = 193,
// KP_XOR = 194,
// KP_POWER = 195,
// KP_PERCENT = 196,
// KP_LESS = 197,
// KP_GREATER = 198,
// KP_AMPERSAND = 199,
// KP_DBLAMPERSAND = 200,
// KP_VERTICALBAR = 201,
// KP_DBLVERTICALBAR = 202,
// KP_COLON = 203,
// KP_HASH = 204,
// KP_SPACE = 205,
// KP_AT = 206,
// KP_EXCLAM = 207,
// KP_MEMSTORE = 208,
// KP_MEMRECALL = 209,
// KP_MEMCLEAR = 210,
// KP_MEMADD = 211,
// KP_MEMSUBTRACT = 212,
// KP_MEMMULTIPLY = 213,
// KP_MEMDIVIDE = 214,
// KP_PLUSMINUS = 215,
// KP_CLEAR = 216,
// KP_CLEARENTRY = 217,
// KP_BINARY = 218,
// KP_OCTAL = 219,
// KP_DECIMAL = 220,
// KP_HEXADECIMAL = 221,
left_control = 224,
left_shift = 225,
left_alt = 226,
// LGUI = 227,
right_control = 228,
right_shift = 229,
right_alt = 230,
count = 512,
}


@@ -0,0 +1,168 @@
package sectr
MaxKeyboardKeys :: 512
KeyCode :: enum u32 {
null = 0x00,
ignored = 0x01,
menu = 0x02,
world_1 = 0x03,
world_2 = 0x04,
// 0x05
// 0x06
// 0x07
backspace = '\b', // 0x08
tab = '\t', // 0x09
right = 0x0A,
left = 0x0B,
down = 0x0C,
up = 0x0D,
enter = '\r', // 0x0E
// 0x0F
caps_lock = 0x10,
scroll_lock = 0x11,
num_lock = 0x12,
left_alt = 0x13,
left_shift = 0x14,
left_control = 0x15,
right_alt = 0x16,
right_shift = 0x17,
right_control = 0x18,
print_screen = 0x19,
pause = 0x1A,
escape = '\x1B', // 0x1B
home = 0x1C,
end = 0x1D,
page_up = 0x1E,
page_down = 0x1F,
space = ' ', // 0x20
exclamation = '!', // 0x21
quote_dbl = '"', // 0x22
hash = '#', // 0x23
dollar = '$', // 0x24
percent = '%', // 0x25
ampersand = '&', // 0x26
quote = '\'', // 0x27
paren_open = '(', // 0x28
paren_close = ')', // 0x29
asterisk = '*', // 0x2A
plus = '+', // 0x2B
comma = ',', // 0x2C
minus = '-', // 0x2D
period = '.', // 0x2E
slash = '/', // 0x2F
nrow_0 = '0', // 0x30
nrow_1 = '1', // 0x31
nrow_2 = '2', // 0x32
nrow_3 = '3', // 0x33
nrow_4 = '4', // 0x34
nrow_5 = '5', // 0x35
nrow_6 = '6', // 0x36
nrow_7 = '7', // 0x37
nrow_8 = '8', // 0x38
nrow_9 = '9', // 0x39
// 0x3A
semicolon = ';', // 0x3B
less = '<', // 0x3C
equals = '=', // 0x3D
greater = '>', // 0x3E
question = '?', // 0x3F
at = '@', // 0x40
A = 'A', // 0x41
B = 'B', // 0x42
C = 'C', // 0x43
D = 'D', // 0x44
E = 'E', // 0x45
F = 'F', // 0x46
G = 'G', // 0x47
H = 'H', // 0x48
I = 'I', // 0x49
J = 'J', // 0x4A
K = 'K', // 0x4B
L = 'L', // 0x4C
M = 'M', // 0x4D
N = 'N', // 0x4E
O = 'O', // 0x4F
P = 'P', // 0x50
Q = 'Q', // 0x51
R = 'R', // 0x52
S = 'S', // 0x53
T = 'T', // 0x54
U = 'U', // 0x55
V = 'V', // 0x56
W = 'W', // 0x57
X = 'X', // 0x58
Y = 'Y', // 0x59
Z = 'Z', // 0x5A
bracket_open = '[', // 0x5B
backslash = '\\', // 0x5C
bracket_close = ']', // 0x5D
caret = '^', // 0x5E
underscore = '_', // 0x5F
backtick = '`', // 0x60
kpad_0 = 0x61,
kpad_1 = 0x62,
kpad_2 = 0x63,
kpad_3 = 0x64,
kpad_4 = 0x65,
kpad_5 = 0x66,
kpad_6 = 0x67,
kpad_7 = 0x68,
kpad_8 = 0x69,
kpad_9 = 0x6A,
kpad_decimal = 0x6B,
kpad_equals = 0x6C,
kpad_plus = 0x6D,
kpad_minus = 0x6E,
kpad_multiply = 0x6F,
kpad_divide = 0x70,
kpad_enter = 0x71,
F1 = 0x72,
F2 = 0x73,
F3 = 0x74,
F4 = 0x75,
F5 = 0x76,
F6 = 0x77,
F7 = 0x78,
F8 = 0x79,
F9 = 0x7A,
F10 = 0x7B,
F11 = 0x7C,
F12 = 0x7D,
insert = 0x7E,
delete = 0x7F,
F13 = 0x80,
F14 = 0x81,
F15 = 0x82,
F16 = 0x83,
F17 = 0x84,
F18 = 0x85,
F19 = 0x86,
F20 = 0x87,
F21 = 0x88,
F22 = 0x89,
F23 = 0x8A,
F24 = 0x8B,
F25 = 0x8C,
count = 0x8D,
}

code2/sectr/math.odin Normal file

@@ -0,0 +1,298 @@
package sectr
/*
This is heavy work-in-progress personalized math definitions.
Desire is for the definitions to come from a geometric algebra / Clifford algebra lens instead of linear algebra.
Want to maximize use of the optimal linear algebra operations already defined by Odin's linalg library, though.
I apologize if this looks terrible; my intuition for math is very sub-par symbolically.
*/
import "base:intrinsics"
import "core:math"
import la "core:math/linalg"
@private IS_NUMERIC :: intrinsics.type_is_numeric
Axis2 :: enum i32 {
Invalid = -1,
X = 0,
Y = 1,
Count,
}
f32_Infinity :: 0x7F800000 // bit pattern for +inf; transmute to f32 before use
f32_Min :: 0x00800000 // bit pattern for the smallest normalized f32
// Note(Ed): I don't see an intrinsic available anywhere for this, so I'll be using the Terathon non-SSE impl
// Inverse Square Root
// C++ Source https://github.com/EricLengyel/Terathon-Math-Library/blob/main/TSMath.cpp#L191
inverse_sqrt_f32 :: proc "contextless" ( value: f32 ) -> f32 {
if ( value < f32_Min) { return f32_Infinity }
value_u32 := transmute(u32) value
initial_approx := 0x5F375A86 - (value_u32 >> 1)
refined_approx := transmute(f32) initial_approx
// NewtonRaphson method for getting better approximations of square roots
// Done twice for greater accuracy.
refined_approx = refined_approx * (1.5 - value * 0.5 * refined_approx * refined_approx )
refined_approx = refined_approx * (1.5 - value * 0.5 * refined_approx * refined_approx )
// refined_approx = (0.5 * refined_approx) * (3.0 - value * refined_approx * refined_approx)
// refined_approx = (0.5 * refined_approx) * (3.0 - value * refined_approx * refined_approx)
return refined_approx
}
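A minimal C sketch of the same technique (the magic constant and refinement steps match the Odin proc above; the `memcpy` calls stand in for Odin's `transmute`, and the `f32_Min` threshold is written out as the smallest normalized f32):

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>
#include <string.h>

// Terathon/Quake-style inverse square root: a bit-level initial guess
// refined by two Newton-Raphson iterations for accuracy.
static float inverse_sqrt_f32(float value)
{
    if (value < 1.17549435e-38f) // smallest normalized f32 (f32_Min above)
        return INFINITY;
    uint32_t bits;
    memcpy(&bits, &value, sizeof bits);   // reinterpret the float's bits
    bits = 0x5F375A86u - (bits >> 1);     // initial approximation
    float approx;
    memcpy(&approx, &bits, sizeof approx);
    // Newton-Raphson refinement: x' = x * (1.5 - 0.5 * v * x * x), done twice
    approx = approx * (1.5f - value * 0.5f * approx * approx);
    approx = approx * (1.5f - value * 0.5f * approx * approx);
    return approx;
}
```

Two refinement passes bring the relative error well under 0.1%, which is plenty for normalization work.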
is_power_of_two_u32 :: #force_inline proc "contextless" (value: u32) -> b32 { return value != 0 && ( value & ( value - 1 )) == 0 }
mov_avg_exp_f32 :: #force_inline proc "contextless" (alpha, delta_interval, last_value: f32) -> f32 { return (delta_interval * alpha) + (last_value * (1.0 - alpha)) }
mov_avg_exp_f64 :: #force_inline proc "contextless" (alpha, delta_interval, last_value: f64) -> f64 { return (delta_interval * alpha) + (last_value * (1.0 - alpha)) }
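The exponential moving average blends the newest sample against the previous average, so `last_value` must participate in the second term. A plain-C illustration of the formula:

```c
#include <assert.h>
#include <math.h>

// Exponential moving average: alpha in (0, 1] weights the newest sample;
// (1 - alpha) carries forward the previous average.
static double mov_avg_exp(double alpha, double new_sample, double last_avg)
{
    return new_sample * alpha + last_avg * (1.0 - alpha);
}
```

Typical use is smoothing frame times: `avg_ms = mov_avg_exp(0.1, frame_ms, avg_ms)` each tick, where small alphas smooth harder.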
Quat_F4 :: quaternion128
V2_S4 :: [2]i32
V3_S4 :: [3]i32
M2_F4 :: matrix [2, 2] f32 // Column Major
R2_F4 :: struct { p0, p1: V2_F4 } // Column Major (they are equivalent)
UR2_F4 :: distinct R2_F4
r2f4_zero :: R2_F4 {}
r2f4 :: #force_inline proc "contextless" (a, b: V2_F4) -> R2_F4 { return R2_F4{a, b} }
m2f4_from_r2f4 :: #force_inline proc "contextless" (range: R2_F4) -> M2_F4 { return transmute(M2_F4)range }
r2f4_from_m2f4 :: #force_inline proc "contextless" (m: M2_F4) -> R2_F4 { return transmute(R2_F4)m }
add_r2f4 :: #force_inline proc "contextless" (a, b: R2_F4) -> R2_F4 { return r2f4_from_m2f4(m2f4_from_r2f4(a) + m2f4_from_r2f4(b)) }
sub_r2f4 :: #force_inline proc "contextless" (a, b: R2_F4) -> R2_F4 { return r2f4_from_m2f4(m2f4_from_r2f4(a) - m2f4_from_r2f4(b)) }
equal_r2f4 :: #force_inline proc "contextless" (a, b: R2_F4) -> b32 { result := a.p0 == b.p0 && a.p1 == b.p1; return b32(result) }
// Will resolve the largest range possible given a & b.
join_r2f4 :: #force_inline proc "contextless" (a, b: R2_F4) -> (joined : R2_F4) { joined.p0 = min(a.p0, b.p0); joined.p1 = max(a.p1, b.p1); return }
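The join takes the component-wise minimum of the low corners and maximum of the high corners, producing the smallest range enclosing both inputs. A C sketch (the `V2`/`R2` struct names are illustrative, not from the codebase):

```c
#include <assert.h>

// Illustrative mirrors of V2_F4 / R2_F4.
typedef struct { float x, y; } V2;
typedef struct { V2 p0, p1; } R2;

static float min_f(float a, float b) { return a < b ? a : b; }
static float max_f(float a, float b) { return a > b ? a : b; }

// Largest range possible given a & b: min of low corners, max of high corners.
static R2 join_r2(R2 a, R2 b)
{
    R2 joined;
    joined.p0 = (V2){ min_f(a.p0.x, b.p0.x), min_f(a.p0.y, b.p0.y) };
    joined.p1 = (V2){ max_f(a.p1.x, b.p1.x), max_f(a.p1.y, b.p1.y) };
    return joined;
}
```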
size_r2f4 :: #force_inline proc "contextless" (value: R2_F4) -> V2_F4 { return {abs(value.p1.x - value.p0.x), abs(value.p0.y - value.p1.y) }}
min :: la.min
max :: la.max
sqrt :: la.sqrt
sdot :: la.scalar_dot
vdot :: la.vector_dot
qdot_f2 :: la.quaternion64_dot
qdot_f4 :: la.quaternion128_dot
qdot_f8 :: la.quaternion256_dot
inner_product :: dot
outer_product :: intrinsics.outer_product
cross_s :: la.scalar_cross
cross_v2 :: la.vector_cross2
cross_v3 :: la.vector_cross3
/*
V2_F4: 2D Vector (4-Byte Float) 4D Extension (x, y, z : 0, w : 0)
BiV2_F4: 2D Bivector (4-Byte Float)
T2_F4: 3x3 Matrix (4-Byte Float) where 3rd row is always (0, 0, 1)
Rotor2_F4: Rotor 2D (4-Byte Float) s is scalar.
*/
V2_F4 :: [2]f32
BiV2_F4 :: distinct f32
T2_F4 :: matrix [3, 3] f32
UV2_F4 :: distinct V2_F4
Rotor2_F4 :: struct { bv: BiV2_F4, s: f32 }
rotor2f4_to_complex64 :: #force_inline proc "contextless" (rotor: Rotor2_F4) -> complex64 { return transmute(complex64) rotor; }
v2f4_from_f32s :: #force_inline proc "contextless" (x, y: f32 ) -> V2_F4 { return {x, y} }
v2f4_from_scalar :: #force_inline proc "contextless" (scalar: f32 ) -> V2_F4 { return {scalar, scalar}}
v2f4_from_v2s4 :: #force_inline proc "contextless" (v2i: V2_S4) -> V2_F4 { return {f32(v2i.x), f32(v2i.y)}}
v2s4_from_v2f4 :: #force_inline proc "contextless" (v2: V2_F4) -> V2_S4 { return {i32(v2.x), i32(v2.y) }}
/*
PF2_F4 : CGA: 2D flat point (x, y, z)
L2_F4 : PGA: 2D line (x, y, z)
*/
P2_F4 :: distinct V2_F4
PF2_F4 :: distinct V3_F4
L2_F4 :: distinct V3_F4
/*
V3_F4: 3D Vector (x, y, z) (3x1) 4D Expression : (x, y, z, 0)
BiV3_F4: 3D Bivector (yz, zx, xy) (3x1)
TriV3_F4: 3D Trivector (xyz) (1x1)
Rotor3: 3D Rotation Versor-Transform (4x1)
Motor3: 3D Rotation & Translation Transform (4x2)
*/
V3_F4 :: [3]f32
V4_F4 :: [4]f32
BiV3_F4 :: struct #raw_union {
using _ : struct { yz, zx, xy : f32 },
using xyz : V3_F4,
}
TriV3_F4 :: distinct f32
Rotor3_F4 :: struct {
using bv: BiV3_F4,
s: f32, // Scalar
}
Shifter3_F4 :: struct {
using bv: BiV3_F4,
s: f32, // Scalar
}
Motor3 :: struct {
rotor: Rotor3_F4,
md: Shifter3_F4,
}
UV3_F4 :: distinct V3_F4
UV4_F4 :: distinct V4_F4
UBiV3_F4 :: distinct BiV3_F4
//region Vec3
v3f4_via_f32s :: #force_inline proc "contextless" (x, y, z: f32) -> V3_F4 { return {x, y, z} }
// complement_vec3 :: #force_inline proc "contextless" ( v : Vec3 ) -> Bivec3 {return transmute(Bivec3) v}
inverse_mag_v3f4 :: #force_inline proc "contextless" (v: V3_F4) -> (result : f32) { square := pow2_v3f4(v); result = inverse_sqrt_f32( square ); return }
magnitude_v3f4 :: #force_inline proc "contextless" (v: V3_F4) -> (mag: f32) { square := pow2_v3f4(v); mag = sqrt(square); return }
normalize_v3f4 :: #force_inline proc "contextless" (v: V3_F4) -> (unit_v: UV3_F4) { unit_v = transmute(UV3_F4) (v * inverse_mag_v3f4(v)); return }
pow2_v3f4 :: #force_inline proc "contextless" (v: V3_F4) -> (s: f32) { return vdot(v, v) }
project_v3f4 :: proc "contextless" (a, b: V3_F4) -> (a_to_b: V3_F4) { panic_contextless("not implemented") }
reject_v3f4 :: proc "contextless" (a, b: V3_F4 ) -> (a_from_b: V3_F4) { panic_contextless("not implemented") }
project_v3f4_uv3f4 :: #force_inline proc "contextless" (v: V3_F4, u: UV3_F4) -> (v_to_u: V3_F4) { inner := vdot(v, v3(u)); v_to_u = v3(u) * inner; return }
project_uv3f4_v3f4 :: #force_inline proc "contextless" (u: UV3_F4, v: V3_F4) -> (u_to_v: V3_F4) { inner := vdot(v3(u), v); u_to_v = v * inner; return }
// Anti-wedge of vectors
regress_v3f4 :: #force_inline proc "contextless" (a, b : V3_F4) -> f32 { return a.x * b.y - a.y * b.x }
reject_v3f4_uv3f4 :: #force_inline proc "contextless" (v: V3_F4, u: UV3_F4) -> ( v_from_u: V3_F4) { inner := vdot(v, v3(u)); v_from_u = (v - v3(u)) * inner; return }
reject_uv3f4_v3f4 :: #force_inline proc "contextless" (v: V3_F4, u: UV3_F4) -> ( u_from_v: V3_F4) { inner := vdot(v3(u), v); u_from_v = (v3(u) - v) * inner; return }
// Combines the dimensions that are present in a & b
wedge_v3f4 :: #force_inline proc "contextless" (a, b: V3_F4) -> (bv : BiV3_F4) {
yzx_zxy := a.yzx * b.zxy
zxy_yzx := a.zxy * b.yzx
bv = transmute(BiV3_F4) (yzx_zxy - zxy_yzx)
return
}
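The `(yz, zx, xy)` components of the wedge `a ∧ b` are exactly the components of the familiar cross product, just interpreted as a bivector instead of a vector. Spelled out component-wise in C (struct names are illustrative):

```c
#include <assert.h>

typedef struct { float x, y, z; } V3;
typedef struct { float yz, zx, xy; } BiV3;

// Wedge of two 3D vectors: the same arithmetic as the swizzled
// a.yzx * b.zxy - a.zxy * b.yzx in the Odin version above.
static BiV3 wedge_v3(V3 a, V3 b)
{
    BiV3 bv;
    bv.yz = a.y * b.z - a.z * b.y;
    bv.zx = a.z * b.x - a.x * b.z;
    bv.xy = a.x * b.y - a.y * b.x;
    return bv;
}
```

As expected of an antisymmetric product, `wedge_v3(a, a)` is identically zero.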
//endregion Vec3
//region Bivec3
biv3f4_via_f32s :: #force_inline proc "contextless" (yz, zx, xy : f32) -> BiV3_F4 {return { xyz = {yz, zx, xy} }}
complement_biv3f4 :: #force_inline proc "contextless" (b : BiV3_F4) -> BiV3_F4 {return transmute(BiV3_F4) b.xyz} // TODO(Ed): Review this.
//region Operations isomorphic to vectors
negate_biv3f4 :: #force_inline proc "contextless" (b : BiV3_F4) -> BiV3_F4 {return transmute(BiV3_F4) -b.xyz}
add_biv3f4 :: #force_inline proc "contextless" (a, b: BiV3_F4) -> BiV3_F4 {return transmute(BiV3_F4) (a.xyz + b.xyz)}
sub_biv3f4 :: #force_inline proc "contextless" (a, b: BiV3_F4) -> BiV3_F4 {return transmute(BiV3_F4) (a.xyz - b.xyz)}
mul_biv3f4 :: #force_inline proc "contextless" (a, b: BiV3_F4) -> BiV3_F4 {return transmute(BiV3_F4) (a.xyz * b.xyz)}
mul_biv3f4_f32 :: #force_inline proc "contextless" (b: BiV3_F4, s: f32) -> BiV3_F4 {return transmute(BiV3_F4) (b.xyz * s)}
mul_f32_biv3f4 :: #force_inline proc "contextless" (s: f32, b: BiV3_F4) -> BiV3_F4 {return transmute(BiV3_F4) (s * b.xyz)}
div_biv3f4_f32 :: #force_inline proc "contextless" (b: BiV3_F4, s: f32) -> BiV3_F4 {return transmute(BiV3_F4) (b.xyz / s)}
inverse_mag_biv3f4 :: #force_inline proc "contextless" (b: BiV3_F4) -> f32 {return inverse_mag_v3f4(b.xyz)}
magnitude_biv3f4 :: #force_inline proc "contextless" (b: BiV3_F4) -> f32 {return magnitude_v3f4 (b.xyz)}
normalize_biv3f4 :: #force_inline proc "contextless" (b: BiV3_F4) -> UBiV3_F4 {return transmute(UBiV3_F4) normalize_v3f4(b.xyz)}
squared_mag_biv3f4 :: #force_inline proc "contextless" (b: BiV3_F4) -> f32 {return pow2_v3f4(b.xyz)}
//endregion Operations isomorphic to vectors
// The wedge of a bi-vector in 3D vector space results in a Trivector represented as a scalar.
// This scalar usually resolves to zero with six possible exceptions that lead to the negative volume element.
wedge_biv3f4 :: #force_inline proc "contextless" (a, b: BiV3_F4) -> f32 { s := a.yz + b.yz + a.zx + b.zx + a.xy + b.xy; return s }
// anti-wedge (Combines dimensions that are absent from a & b)
regress_biv3f4 :: #force_inline proc "contextless" (a, b: BiV3_F4) -> V3_F4 {return wedge_v3f4(v3(a), v3(b))}
regress_biv3f4_v3f4 :: #force_inline proc "contextless" (b: BiV3_F4, v: V3_F4) -> f32 {return regress_v3f4(b.xyz, v)}
regress_v3_biv3f4 :: #force_inline proc "contextless" (v: V3_F4, b: BiV3_F4) -> f32 {return regress_v3f4(b.xyz, v)}
//endregion biv3f4
//region Rotor3
rotor3f4_via_comps_f4 :: proc "contextless" (yz, zx, xy, scalar : f32) -> Rotor3_F4 { return Rotor3_F4 {biv3f4_via_f32s(yz, zx, xy), scalar} }
rotor3f4_via_bv_s_f4 :: #force_inline proc "contextless" (bv: BiV3_F4, scalar: f32) -> (rotor : Rotor3_F4) { return Rotor3_F4 {bv, scalar} }
// rotor3f4_via_from_to_v3f4 :: #force_inline proc "contextless" (from, to: V3_F4) -> (rotor : Rotor3_F4) { rotor.scalar := 1 + dot( from, to ); return }
inverse_mag_rotor3f4 :: #force_inline proc "contextless" (rotor : Rotor3_F4) -> (s : f32) { panic_contextless("not implemented") }
magnitude_rotor3f4 :: #force_inline proc "contextless" (rotor : Rotor3_F4) -> (s : f32) { panic_contextless("not implemented") }
squared_mag_f4 :: #force_inline proc "contextless" (rotor : Rotor3_F4) -> (s : f32) { panic_contextless("not implemented") }
reverse_rotor3_f4 :: #force_inline proc "contextless" (rotor : Rotor3_F4) -> (reversed : Rotor3_F4) { reversed = { negate_biv3f4(rotor.bv), rotor.s }; return }
//endregion Rotor3
//region Flat Projective Geometry
Point3_F4 :: distinct V3_F4
PointFlat3_F4 :: distinct V4_F4
Line3_F4 :: struct {
weight: V3_F4,
bulk: BiV3_F4,
}
Plane3_F4 :: distinct V4_F4 // 4D Anti-vector
// aka: wedge operation for points
join_point3_f4 :: proc "contextless" (p, q : Point3_F4) -> (l : Line3_F4) {
weight := v3(q) - v3(p)
bulk := wedge(v3(p), v3(q))
l = {weight, bulk}
return
}
join_pointflat3_f4 :: proc "contextless" (p, q : PointFlat3_F4) -> (l : Line3_F4) {
weight := v3f4(
p.w * q.x - p.x * q.w,
p.w * q.y - p.y * q.w,
p.w * q.z - p.z * q.w
)
bulk := wedge(v3(p), v3(q))
l = { weight, bulk}
return
}
sub_point3_f4 :: #force_inline proc "contextless" (a, b : Point3_F4) -> (v : V3_F4) { v = v3f4(a) - v3f4(b); return }
//endregion Flat Projective Geometry
//region Rational Trig
quadrance :: #force_inline proc "contextless" (a, b: Point3_F4) -> (q : f32) { q = pow2_v3f4(v3(a) - v3(b)); return }
// Assumes the weight component is normalized.
spread :: #force_inline proc "contextless" (l, m: Line3_F4) -> (s : f32) { s = vdot(l.weight, m.weight); return }
//endregion Rational Trig
//region Grime
// A dump of equivalent symbol generation (because the toolchain can't do it yet)
// Symbol alias tables are in grim.odin
v3f4_to_biv3f4 :: #force_inline proc "contextless" (v: V3_F4) -> BiV3_F4 {return transmute(BiV3_F4) v }
biv3f4_to_v3f4 :: #force_inline proc "contextless" (bv: BiV3_F4) -> V3_F4 {return transmute(V3_F4) bv }
quatf4_from_rotor3f4 :: #force_inline proc "contextless" (rotor: Rotor3_F4) -> Quat_F4 {return transmute(Quat_F4) rotor }
uv3f4_to_v3f4 :: #force_inline proc "contextless" (v: UV3_F4) -> V3_F4 {return transmute(V3_F4) v }
uv4f4_to_v4f4 :: #force_inline proc "contextless" (v: UV4_F4) -> V4_F4 {return transmute(V4_F4) v }
// plane_to_v4f4 :: #force_inline proc "contextless" (p : Plane3_F4) -> V4_F4 {return transmute(V4_F4) p}
point3f4_to_v3f4 :: #force_inline proc "contextless" (p: Point3_F4) -> V3_F4 {return {p.x, p.y, p.z} }
pointflat3f4_to_v3f4 :: #force_inline proc "contextless" (p: PointFlat3_F4) -> V3_F4 {return {p.x, p.y, p.z} }
v3f4_to_point3f4 :: #force_inline proc "contextless" (v: V3_F4) -> Point3_F4 {return {v.x, v.y, v.z} }
cross_v3f4_uv3f4 :: #force_inline proc "contextless" (v: V3_F4, u: UV3_F4) -> V3_F4 {return cross_v3(v, transmute(V3_F4) u)}
cross_u3f4_v3f4 :: #force_inline proc "contextless" (u: UV3_F4, v: V3_F4) -> V3_F4 {return cross_v3(transmute(V3_F4) u, v)}
dot_v3f4_uv3f4 :: #force_inline proc "contextless" (v: V3_F4, unit_v: UV3_F4) -> f32 {return vdot(v, transmute(V3_F4) unit_v)}
dot_uv3f4_v3f4 :: #force_inline proc "contextless" (unit_v: UV3_F4, v: V3_F4) -> f32 {return vdot(v, transmute(V3_F4) unit_v)}
wedge_v3f4_uv3f4 :: #force_inline proc "contextless" (v : V3_F4, unit_v: UV3_F4) -> BiV3_F4 {return wedge_v3f4(v, v3(unit_v))}
wedge_uv3f4_vs :: #force_inline proc "contextless" (unit_v: UV3_F4, v: V3_F4) -> BiV3_F4 {return wedge_v3f4(v3(unit_v), v)}
//endregion Grime

View File

@@ -1,12 +1,312 @@
package sectr
/*
All direct non-codebase package symbols should do zero allocations.
Any symbol that does must be mapped from the Grime package to properly triage its allocator to Odin's idiomatic interface.
*/
import "core:thread"
Thread :: thread.Thread
import "base:intrinsics"
debug_trap :: intrinsics.debug_trap
import "base:runtime"
Context :: runtime.Context
import "core:dynlib"
// Only referenced in ModuleAPI
DynLibrary :: dynlib.Library
import "core:log"
LoggerLevel :: log.Level
import "core:mem"
AllocatorError :: mem.Allocator_Error
// Used strictly for the logger
Odin_Arena :: mem.Arena
odin_arena_allocator :: mem.arena_allocator
import "core:os"
FileTime :: os.File_Time
process_exit :: os.exit
import "core:prof/spall"
SPALL_BUFFER_DEFAULT_SIZE :: spall.BUFFER_DEFAULT_SIZE
Spall_Context :: spall.Context
Spall_Buffer :: spall.Buffer
import "core:sync"
AtomicMutex :: sync.Atomic_Mutex
barrier_wait :: sync.barrier_wait
sync_store :: sync.atomic_store_explicit
sync_load :: sync.atomic_load_explicit
sync_add :: sync.atomic_add_explicit
sync_sub :: sync.atomic_sub_explicit
sync_mutex_lock :: sync.atomic_mutex_lock
sync_mutex_unlock :: sync.atomic_mutex_unlock
sync_mutex_try_lock :: sync.atomic_mutex_try_lock
import threading "core:thread"
SysThread :: threading.Thread
ThreadProc :: threading.Thread_Proc
thread_create :: threading.create
thread_start :: threading.start
import "core:time"
Millisecond :: time.Millisecond
Duration :: time.Duration
Tick :: time.Tick
duration_ms :: time.duration_milliseconds
duration_seconds :: time.duration_seconds
thread_sleep :: time.sleep
tick_lap_time :: time.tick_lap_time
tick_now :: time.tick_now
import "codebase:grime"
ensure :: grime.ensure
fatal :: grime.fatal
verify :: grime.verify
Array :: grime.Array
array_to_slice :: grime.array_to_slice
array_append_array :: grime.array_append_array
array_append_slice :: grime.array_append_slice
array_append_value :: grime.array_append_value
array_back :: grime.array_back
array_clear :: grime.array_clear
// Logging
Logger :: grime.Logger
logger_init :: grime.logger_init
// Memory
mem_alloc :: grime.mem_alloc
mem_copy_overlapping :: grime.mem_copy_overlapping
mem_copy :: grime.mem_copy
mem_zero :: grime.mem_zero
slice_zero :: grime.slice_zero
// Ring Buffer
FRingBuffer :: grime.FRingBuffer
FRingBufferIterator :: grime.FRingBufferIterator
ringbuf_fixed_peak_back :: grime.ringbuf_fixed_peak_back
ringbuf_fixed_push :: grime.ringbuf_fixed_push
ringbuf_fixed_push_slice :: grime.ringbuf_fixed_push_slice
iterator_ringbuf_fixed :: grime.iterator_ringbuf_fixed
next_ringbuf_fixed_iterator :: grime.next_ringbuf_fixed_iterator
// Strings
cstr_to_str_capped :: grime.cstr_to_str_capped
to_odin_logger :: grime.to_odin_logger
// Operating System
set__scheduler_granularity :: grime.set__scheduler_granularity
// grime_set_profiler_module_context :: grime.set_profiler_module_context
// grime_set_profiler_thread_buffer :: grime.set_profiler_thread_buffer
Kilo :: 1024
Mega :: Kilo * 1024
Giga :: Mega * 1024
Tera :: Giga * 1024
// chrono
NS_To_MS :: grime.NS_To_MS
NS_To_US :: grime.NS_To_US
NS_To_S :: grime.NS_To_S
US_To_NS :: grime.US_To_NS
US_To_MS :: grime.US_To_MS
US_To_S :: grime.US_To_S
MS_To_NS :: grime.MS_To_NS
MS_To_US :: grime.MS_To_US
MS_To_S :: grime.MS_To_S
S_To_NS :: grime.S_To_NS
S_To_US :: grime.S_To_US
S_To_MS :: grime.S_To_MS
// ensure :: #force_inline proc( condition : b32, msg : string, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Warning, location )
// debug_trap()
// }
// // TODO(Ed) : Setup exit codes!
// fatal :: #force_inline proc( msg : string, exit_code : int = -1, location := #caller_location ) {
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
// // TODO(Ed) : Setup exit codes!
// verify :: #force_inline proc( condition : b32, msg : string, exit_code : int = -1, location := #caller_location ) {
// if condition do return
// log_print( msg, LoggerLevel.Fatal, location )
// debug_trap()
// process_exit( exit_code )
// }
log_print :: proc( msg : string, level := LoggerLevel.Info, loc := #caller_location ) {
context.allocator = odin_arena_allocator(& memory.host_scratch)
context.temp_allocator = odin_arena_allocator(& memory.host_scratch)
log.log( level, msg, location = loc )
}
log_print_fmt :: proc( fmt : string, args : ..any, level := LoggerLevel.Info, loc := #caller_location ) {
context.allocator = odin_arena_allocator(& memory.host_scratch)
context.temp_allocator = odin_arena_allocator(& memory.host_scratch)
log.logf( level, fmt, ..args, location = loc )
}
@(deferred_none = profile_end, disabled = DISABLE_CLIENT_PROFILING)
profile :: #force_inline proc "contextless" ( name : string, loc := #caller_location ) {
spall._buffer_begin( & memory.spall_context, & thread.spall_buffer, name, "", loc )
}
@(disabled = DISABLE_CLIENT_PROFILING)
profile_begin :: #force_inline proc "contextless" ( name : string, loc := #caller_location ) {
spall._buffer_begin( & memory.spall_context, & thread.spall_buffer, name, "", loc )
}
@(disabled = DISABLE_CLIENT_PROFILING)
profile_end :: #force_inline proc "contextless" () {
spall._buffer_end( & memory.spall_context, & thread.spall_buffer)
}
// Procedure Mappings
add :: proc {
add_r2f4,
add_biv3f4,
}
append :: proc {
array_append_array,
array_append_slice,
array_append_value,
}
array_append :: proc {
array_append_array,
array_append_slice,
array_append_value,
}
biv3f4 :: proc {
biv3f4_via_f32s,
v3f4_to_biv3f4,
}
bivec :: biv3f4
clear :: proc {
array_clear,
}
cross :: proc {
cross_s,
cross_v2,
cross_v3,
cross_v3f4_uv3f4,
cross_u3f4_v3f4,
}
div :: proc {
div_biv3f4_f32,
}
dot :: proc {
sdot,
vdot,
qdot_f2,
qdot_f4,
qdot_f8,
dot_v3f4_uv3f4,
dot_uv3f4_v3f4,
}
equal :: proc {
equal_r2f4,
}
is_power_of_two :: proc {
is_power_of_two_u32,
// is_power_of_two_uintptr,
}
iterator :: proc {
iterator_ringbuf_fixed,
}
mov_avg_exp :: proc {
mov_avg_exp_f32,
mov_avg_exp_f64,
}
mul :: proc {
mul_biv3f4,
mul_biv3f4_f32,
mul_f32_biv3f4,
}
join :: proc {
join_r2f4,
}
inverse_sqrt :: proc {
inverse_sqrt_f32,
}
next :: proc {
next_ringbuf_fixed_iterator,
}
point3 :: proc {
v3f4_to_point3f4,
}
pow2 :: proc {
pow2_v3f4,
}
peek_back :: proc {
ringbuf_fixed_peak_back,
}
push :: proc {
ringbuf_fixed_push,
ringbuf_fixed_push_slice,
}
quatf4 :: proc {
quatf4_from_rotor3f4,
}
regress :: proc {
regress_biv3f4,
}
rotor3 :: proc {
rotor3f4_via_comps_f4,
rotor3f4_via_bv_s_f4,
// rotor3f4_via_from_to_v3f4,
}
size :: proc {
size_r2f4,
}
sub :: proc {
sub_r2f4,
sub_biv3f4,
// join_point3_f4,
// join_pointflat3_f4,
}
to_slice :: proc {
array_to_slice,
}
v2f4 :: proc {
v2f4_from_f32s,
v2f4_from_scalar,
v2f4_from_v2s4,
v2s4_from_v2f4,
}
v3f4 :: proc {
v3f4_via_f32s,
biv3f4_to_v3f4,
point3f4_to_v3f4,
pointflat3f4_to_v3f4,
uv3f4_to_v3f4,
}
v2 :: proc {
v2f4_from_f32s,
v2f4_from_scalar,
v2f4_from_v2s4,
v2s4_from_v2f4,
}
v3 :: proc {
v3f4_via_f32s,
biv3f4_to_v3f4,
point3f4_to_v3f4,
pointflat3f4_to_v3f4,
uv3f4_to_v3f4,
}
v4 :: proc {
uv4f4_to_v4f4,
}
wedge :: proc {
wedge_v3f4,
wedge_biv3f4,
}
zero :: proc {
mem_zero,
slice_zero,
}

code2/sectr/space.odin Normal file
View File

@@ -0,0 +1,145 @@
package sectr
/* Space
Provides various definitions for converting from one standard of measurement to another.
Provides constructs and transformations with regard to space.
Ultimately the user's window ppcm (pixels-per-centimeter) determines how all virtual metric conventions are handled.
*/
// The points to pixels and pixels to points are our only reference to accurately converting
// an object from world space to screen-space.
// This prototype engine bases all of its spatial distance units on virtual pixels.
Inches_To_CM :: cast(f32) 2.54
Points_Per_CM :: cast(f32) 28.3465
CM_Per_Point :: cast(f32) 1.0 / Points_Per_CM
CM_Per_Pixel :: cast(f32) 1.0 / DPT_PPCM
DPT_DPCM :: cast(f32) 72.0 * Inches_To_CM // 182.88 points/dots per cm
DPT_PPCM :: cast(f32) 96.0 * Inches_To_CM // 243.84 pixels per cm
when ODIN_OS == .Windows {
os_default_dpcm :: 72.0 * Inches_To_CM
os_default_ppcm :: 96.0 * Inches_To_CM
// 1 inch = 2.54 cm; 96 dpi * 2.54 = 243.84 DPCM
}
//region Unit Conversion Impl
// cm_to_points :: proc( cm : f32 ) -> f32 {
// }
// points_to_cm :: proc( points : f32 ) -> f32 {
// screen_dpc := get_state().app_window.dpc
// cm_per_pixel := 1.0 / screen_dpc
// pixels := points * DPT_DPC * cm_per_pixel
// return points *
// }
f32_cm_to_pixels :: #force_inline proc "contextless"(cm, screen_ppcm: f32) -> f32 { return cm * screen_ppcm }
f32_pixels_to_cm :: #force_inline proc "contextless"(pixels, screen_ppcm: f32) -> f32 { return pixels * (1.0 / screen_ppcm) }
f32_points_to_pixels :: #force_inline proc "contextless"(points, screen_ppcm: f32) -> f32 { return points * DPT_PPCM * (1.0 / screen_ppcm) }
f32_pixels_to_points :: #force_inline proc "contextless"(pixels, screen_ppcm: f32) -> f32 { return pixels * (1.0 / screen_ppcm) * Points_Per_CM }
v2f4_cm_to_pixels :: #force_inline proc "contextless"(v: V2_F4, screen_ppcm: f32) -> V2_F4 { return v * screen_ppcm }
v2f4_pixels_to_cm :: #force_inline proc "contextless"(v: V2_F4, screen_ppcm: f32) -> V2_F4 { return v * (1.0 / screen_ppcm) }
v2f4_points_to_pixels :: #force_inline proc "contextless"(vpoints: V2_F4, screen_ppcm: f32) -> V2_F4 { return vpoints * DPT_PPCM * (1.0 / screen_ppcm) }
r2f4_cm_to_pixels :: #force_inline proc "contextless"(range: R2_F4, screen_ppcm: f32) -> R2_F4 { return R2_F4 { range.p0 * screen_ppcm, range.p1 * screen_ppcm } }
range2_pixels_to_cm :: #force_inline proc "contextless"(range: R2_F4, screen_ppcm: f32) -> R2_F4 { cm_per_pixel := 1.0 / screen_ppcm; return R2_F4 { range.p0 * cm_per_pixel, range.p1 * cm_per_pixel } }
// vec2_points_to_cm :: proc( vpoints : Vec2 ) -> Vec2 {
// }
//endregion Unit Conversion Impl
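The cm-to-pixel conversions above reduce to a single multiply or divide by the window's measured pixels-per-centimetre. A C sketch of the two core helpers:

```c
#include <assert.h>
#include <math.h>

// Centimetre <-> pixel conversions keyed off the window's measured
// pixels-per-centimetre (ppcm), mirroring f32_cm_to_pixels / f32_pixels_to_cm.
static float cm_to_pixels(float cm, float screen_ppcm)     { return cm * screen_ppcm; }
static float pixels_to_cm(float pixels, float screen_ppcm) { return pixels / screen_ppcm; }
```

At the 96-DPI reference scale used above (DPT_PPCM = 243.84), one centimetre maps to 243.84 virtual pixels, and the two helpers round-trip exactly.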
AreaSize :: V2_F4
Bounds2 :: struct {
top_left, bottom_right: V2_F4,
}
BoundsCorners2 :: struct {
top_left, top_right, bottom_left, bottom_right: V2_F4,
}
E2_F4 :: V2_F4
E2_S4 :: V2_S4
WS_Pos :: struct {
tile_id : V2_S4,
rel : V2_F4,
}
Camera :: struct {
view : E2_F4,
position : V2_F4,
zoom : f32,
}
Camera_Default := Camera { zoom = 1 }
CameraZoomMode :: enum u32 {
Digital,
Smooth,
}
Extents2_F4 :: V2_F4
Extents2_S4 :: V2_S4
bounds2_radius :: #force_inline proc "contextless" (bounds: Bounds2) -> f32 { return max( bounds.bottom_right.x, bounds.top_left.y ) }
extent_from_size :: #force_inline proc "contextless" (size: AreaSize) -> Extents2_F4 { return transmute(Extents2_F4) (size * 0.5) }
screen_size :: #force_inline proc "contextless" (screen_extent: Extents2_F4) -> AreaSize { return transmute(AreaSize) (screen_extent * 2.0) }
screen_get_bounds :: #force_inline proc "contextless" (screen_extent: Extents2_F4) -> R2_F4 { return R2_F4 { { -screen_extent.x, -screen_extent.y} /*bottom_left*/, { screen_extent.x, screen_extent.y} /*top_right*/ } }
screen_get_corners :: #force_inline proc "contextless"(screen_extent: Extents2_F4) -> BoundsCorners2 { return {
top_left = { -screen_extent.x, screen_extent.y },
top_right = { screen_extent.x, screen_extent.y },
bottom_left = { -screen_extent.x, -screen_extent.y },
bottom_right = { screen_extent.x, -screen_extent.y },
}}
view_get_bounds :: #force_inline proc "contextless"(cam: Camera, screen_extent: Extents2_F4) -> R2_F4 {
cam_zoom_ratio := 1.0 / cam.zoom
bottom_left := V2_F4 { -screen_extent.x, -screen_extent.y}
top_right := V2_F4 { screen_extent.x, screen_extent.y}
bottom_left = screen_to_ws_view_pos(bottom_left, cam.position, cam.zoom)
top_right = screen_to_ws_view_pos(top_right, cam.position, cam.zoom)
return R2_F4{bottom_left, top_right}
}
view_get_corners :: #force_inline proc "contextless"(cam: Camera, screen_extent: Extents2_F4) -> BoundsCorners2 {
cam_zoom_ratio := 1.0 / cam.zoom
zoomed_extent := screen_extent * cam_zoom_ratio
top_left := cam.position + V2_F4 { -zoomed_extent.x, zoomed_extent.y }
top_right := cam.position + V2_F4 { zoomed_extent.x, zoomed_extent.y }
bottom_left := cam.position + V2_F4 { -zoomed_extent.x, -zoomed_extent.y }
bottom_right := cam.position + V2_F4 { zoomed_extent.x, -zoomed_extent.y }
return { top_left, top_right, bottom_left, bottom_right }
}
render_to_screen_pos :: #force_inline proc "contextless" (pos: V2_F4, screen_extent: Extents2_F4) -> V2_F4 { return V2_F4 { pos.x - screen_extent.x, (pos.y * -1) + screen_extent.y } }
render_to_ws_view_pos :: #force_inline proc "contextless" (pos: V2_F4) -> V2_F4 { return {} } //TODO(Ed): Implement?
screen_to_ws_view_pos :: #force_inline proc "contextless" (pos: V2_F4, cam_pos: V2_F4, cam_zoom: f32, ) -> V2_F4 { return pos * (/*Camera Zoom Ratio*/1.0 / cam_zoom) - cam_pos } // TODO(Ed): Doesn't take into account view extent.
screen_to_render_pos :: #force_inline proc "contextless" (pos: V2_F4, screen_extent: Extents2_F4) -> V2_F4 { return pos + screen_extent } // Centered screen space to conventional screen space used for rendering
// TODO(Ed): These should assume a cam_context or have the ability to provide it in params
ws_view_extent :: #force_inline proc "contextless" (cam_view: Extents2_F4, cam_zoom: f32) -> Extents2_F4 { return cam_view * (/*Camera Zoom Ratio*/1.0 / cam_zoom) }
ws_view_to_screen_pos :: #force_inline proc "contextless" (ws_pos : V2_F4, cam: Camera) -> V2_F4 {
// Apply camera transformation
view_pos := (ws_pos - cam.position) * cam.zoom
// TODO(Ed): properly take into account cam.view
screen_pos := view_pos
return screen_pos
}
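The camera transform above is translate-then-scale: subtract the camera position, then scale by zoom. Since the view-extent handling is still a TODO in the source, this sketch only covers position and zoom (struct name is illustrative):

```c
#include <assert.h>

typedef struct { float x, y; } V2c;

// Workspace-view to screen position: translate by the camera position,
// then scale by zoom (view extent intentionally ignored, as in the source).
static V2c ws_view_to_screen(V2c ws, V2c cam_pos, float zoom)
{
    V2c s = { (ws.x - cam_pos.x) * zoom, (ws.y - cam_pos.y) * zoom };
    return s;
}
```

For example, a point 8 units right and 2 units up of the camera at 2x zoom lands 16 pixels right and 4 pixels up of screen centre.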
ws_view_to_render_pos :: #force_inline proc "contextless"(position: V2_F4, cam: Camera, screen_extent: Extents2_F4) -> V2_F4 {
extent_offset: V2_F4 = { screen_extent.x, screen_extent.y } * { 1, 1 }
position := V2_F4 { position.x, position.y }
cam_offset := V2_F4 { cam.position.x, cam.position.y }
return extent_offset + (position + cam_offset) * cam.zoom
}
// Workspace view to screen space position (zoom agnostic)
// TODO(Ed): Support a position which would not be centered on the screen if in a viewport
ws_view_to_screen_pos_no_zoom :: #force_inline proc "contextless"(position: V2_F4, cam: Camera) -> V2_F4 {
cam_zoom_ratio := 1.0 / cam.zoom
return { position.x, position.y } * cam_zoom_ratio
}
// Workspace view to render space position (zoom agnostic)
// TODO(Ed): Support a position which would not be centered on the screen if in a viewport
ws_view_to_render_pos_no_zoom :: #force_inline proc "contextless"(position: V2_F4, cam: Camera) -> V2_F4 {
cam_zoom_ratio := 1.0 / cam.zoom
return { position.x, position.y } * cam_zoom_ratio
}

View File

@@ -1,8 +1,114 @@
package sectr
//region STATIC MEMORY
// This should be the only global on client module side.
host_memory: ^HostMemory
@(private) memory: ^ProcessMemory
@(private, thread_local) thread: ^ThreadMemory
//endregion STATIC MEMORY
MemoryConfig :: struct {
reserve_persistent : uint,
reserve_frame : uint,
reserve_transient : uint,
reserve_filebuffer : uint,
commit_initial_persistent : uint,
commit_initial_frame : uint,
commit_initial_transient : uint,
commit_initial_filebuffer : uint,
}
// All knobs available for this application
AppConfig :: struct {
using memory : MemoryConfig,
resolution_width : uint,
resolution_height : uint,
refresh_rate : uint,
cam_min_zoom : f32,
cam_max_zoom : f32,
cam_zoom_mode : CameraZoomMode,
cam_zoom_smooth_snappiness : f32,
cam_zoom_sensitivity_smooth : f32,
cam_zoom_sensitivity_digital : f32,
cam_zoom_scroll_delta_scale : f32,
engine_refresh_hz : uint,
timing_fps_moving_avg_alpha : f32,
ui_resize_border_width : f32,
// color_theme : AppColorTheme,
text_snap_glyph_shape_position : b32,
text_snap_glyph_render_height : b32,
text_size_screen_scalar : f32,
text_size_canvas_scalar : f32,
text_alpha_sharpen : f32,
}
AppWindow :: struct {
extent: Extents2_F4, // Window half-size
dpi_scale: f32, // Dots per inch scale (provided by raylib via glfw)
ppcm: f32, // Pixels per centimetre
resized: b32, // Extent changed this frame
}
FrameTime :: struct {
sleep_is_granular : b32,
current_frame : u64,
delta_seconds : f64,
delta_ms : f64,
delta_ns : Duration,
target_ms : f64,
elapsed_ms : f64,
avg_ms : f64,
fps_avg : f64,
}
State :: struct {
job_system: JobSystemContext,
sokol_frame_count: i64,
sokol_context: Context,
config: AppConfig,
app_window: AppWindow,
logger: Logger,
// Overall frametime of the tick frame (currently main thread's)
using frametime : FrameTime,
input_data : [2]InputState,
input_prev : ^InputState,
input : ^InputState, // TODO(Ed): Rename to indicate its the device's signal state for the frame?
input_events: InputEvents,
input_binds_stack: Array(InputContext),
// Note(Ed): Do not modify directly, use its interface in app/event.odin
staged_input_events : Array(InputEvent),
// TODO(Ed): Add a multi-threaded guard for accessing or mutating staged_input_events.
}
ThreadState :: struct {
// Frametime
delta_seconds: f64,
delta_ms: f64,
delta_ns: Duration,
target_ms: f64, // NOTE(Ed): This can only be used on job worker threads.
elapsed_ms: f64,
avg_ms: f64,
}
app_config :: #force_inline proc "contextless" () -> AppConfig { return memory.client_memory.config }
get_frametime :: #force_inline proc "contextless" () -> FrameTime { return memory.client_memory.frametime }
// get_state :: #force_inline proc "contextless" () -> ^State { return memory.client_memory }
get_input_binds :: #force_inline proc "contextless" () -> InputContext { return array_back (memory.client_memory.input_binds_stack) }
get_input_binds_stack :: #force_inline proc "contextless" () -> []InputContext { return array_to_slice(memory.client_memory.input_binds_stack) }

View File

@@ -97,6 +97,7 @@ $flag_radlink = '-radlink'
$flag_sanitize_address = '-sanitize:address'
$flag_sanitize_memory = '-sanitize:memory'
$flag_sanitize_thread = '-sanitize:thread'
$flag_show_definables = '-show-defineables'
$flag_subsystem = '-subsystem:'
$flag_show_debug_messages = '-show-debug-messages'
$flag_show_timings = '-show-timings'
@@ -233,6 +234,7 @@ push-location $path_root
# $build_args += $flag_sanitize_address
# $build_args += $flag_sanitize_memory
# $build_args += $flag_show_debug_messages
$build_args += $flag_show_definables
$build_args += $flag_show_timings
# $build_args += $flag_build_diagnostics
# TODO(Ed): Enforce nil default allocator
@@ -318,6 +320,7 @@ push-location $path_root
# $build_args += $flag_sanitize_address
# $build_args += $flag_sanitize_memory
# $build_args += $flag_build_diagnostics
$build_args += $flag_show_definables
# TODO(Ed): Enforce nil default allocator
# foreach ($arg in $build_args) {

View File

@@ -12,6 +12,8 @@ $url_odin_repo = 'https://github.com/Ed94/Odin.git'
$url_sokol = 'https://github.com/Ed94/sokol-odin.git'
$url_sokol_tools = 'https://github.com/floooh/sokol-tools-bin.git'
# TODO(Ed): https://github.com/karl-zylinski/odin-handle-map
$path_harfbuzz = join-path $path_thirdparty 'harfbuzz'
$path_ini_parser = join-path $path_thirdparty 'ini'
$path_odin = join-path $path_toolchain 'Odin'