Mirror of https://github.com/Ed94/gencpp.git (synced 2024-12-22 07:44:45 -08:00)

Compare commits: 71b7320e1c ... 8d48da0b9e

5 commits:

* 8d48da0b9e
* 30dea2e9fd
* 633879d35f
* 831b52129d
* 55427822a0

Readme.md (35 changed lines)
@@ -5,40 +5,37 @@ An attempt at simple staged metaprogramming for C/C++.
 The library API is a composition of code element constructors, and a non-standards-compliant single-pass C/C++ parser.
 These build up a code AST to then serialize with a file builder, or can be traversed for staged-reflection of C/C++ code.

-This code base attempts follow the [handmade philosophy](https://handmade.network/manifesto).
-Its not meant to be a black box metaprogramming utility, it should be easy to intergrate into a user's project domain.
+This code base attempts follow the [handmade philosophy](https://handmade.network/manifesto).
+Its not meant to be a black box metaprogramming utility, it should be easy to integrate into a user's project domain.

 ## Documentation

 * [docs - General](./docs/Readme.md): Overview and additional docs
-* [AST_Design](./docs/AST_Design.md): Overvie of ASTs
+* [AST_Design](./docs/AST_Design.md): Overview of ASTs
 * [AST Types](./docs/AST_Types.md): Listing of all AST types along with their Code type interface.
 * [Parsing](./docs/Parsing.md): Overview of the parsing interface.
 * [Parser Algo](./docs/Parser_Algo.md): In-depth breakdown of the parser's implementation.
 * [base](./base/Readme.md): Essential (base) library.
-* [gen_c_library](./gen_c_library/): C11 library variant generation (single header and segmeented).
-* [gen_segmented](./gen_segmented/): Segemented C++ (`gen.<hpp/cpp>`, `gen.dep.<hpp/cpp>`) generation
+* [gen_c_library](./gen_c_library/): C11 library variant generation (single header and segmented).
+* [gen_segmented](./gen_segmented/): Segmented C++ (`gen.<hpp/cpp>`, `gen.dep.<hpp/cpp>`) generation
 * [gen_singleheader](./gen_singleheader/): Singlehader C++ generation `gen.hpp`
 * [gen_unreal_engine](./gen_unreal_engine/): Unreal Engine thirdparty code generation.

 ## Notes

 **On Partial Hiatus: Life has got me tackling other issues..**
 I will be passively updating the library with bug fixes and minor improvements as I use it for my personal projects.
 There won't be any major reworks or features to this thing for a while.

 This project is still in development (very much an alpha state), so expect bugs and missing features.
 See [issues](https://github.com/Ed94/gencpp/issues) for a list of known bugs or todos.

 The library can already be used to generate code just fine, but the parser is where the most work is needed. If your C++ isn't "down to earth" expect issues.

-A `natvis` and `natstepfilter` are provided in the scripts directory (its outdated, I'll update this readme when its not).
+A `natvis` and `natstepfilter` are provided in the scripts directory (its outdated, I'll update this readme when its not).
+*Minor update: I've been using [RAD Debugger](https://github.com/EpicGamesExt/raddebugger) with this and the code structures should be easy to debug even without natvis.*

 ## Usage

 A metaprogram is built to generate files before the main program is built. We'll term runtime for this program as `GEN_TIME`. The metaprogram's core implementation are within `gen.hpp` and `gen.cpp` in the project directory.

-`gen.cpp` \`s `main()` is defined as `gen_main()` which the user will have to define once for their program. There they will dictate everything that should be generated.
+`gen.cpp` \`s `main()` is defined as `gen_main()` which the user will have to define once for their program. There they may reflect and/or generate code.

 In order to keep the locality of this code within the same files the following pattern may be used (although this pattern isn't the best to use):

@@ -98,8 +95,8 @@ Validation through ast construction.
 Code header = parse_struct( code(
 struct ArrayHeader
 {
-usize Num;
-usize Capacity;
+usize Num;
+usize Capacity;
 allocator Allocator;
 };
 ));

@@ -114,8 +111,8 @@ No validation, just glorified text injection.
 Code header = code_str(
 struct ArrayHeader
 {
-usize Num;
-usize Capacity;
+usize Num;
+usize Capacity;
 allocator Allocator;
 };
 );

@@ -124,15 +121,15 @@ Code header = code_str(
 `name` is a helper macro for providing a string literal with its size, intended for the name parameter of functions.
 `code` is a helper macro for providing a string literal with its size, but intended for code string parameters.
 `args` is a helper macro for providing the number of arguments to varadic constructors.
-`code_str` is a helper macro for writting `untyped_str( code( <content> ))`
+`code_str` is a helper macro for writing `untyped_str( code( <content> ))`

-All three constrcuton interfaces will generate the following C code:
+All three construction interfaces will generate the following C code:

 ```cpp
 struct ArrayHeader
 {
-usize Num;
-usize Capacity;
+usize Num;
+usize Capacity;
 allocator Allocator;
 };
 ```
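The usage flow the readme hunks above describe — define `gen_main()` once, run it at `GEN_TIME`, and construct `Code` through the constructors or the parser — looks roughly like the following. This is a hedged sketch, not the readme's own snippet: the `u32` return type, the `gen` namespace, and the `gen::init`/`gen::deinit` setup calls are assumptions; only `gen_main`, `GEN_TIME`, `parse_struct`, and `code` come from the diff above.

```cpp
// Hedged sketch of a GEN_TIME metaprogram; setup/teardown calls are assumptions.
#if GEN_TIME
#include "gen.hpp"
using namespace gen;

u32 gen_main()
{
	gen::init();   // assumed library setup

	// Validation through AST construction (same call the readme shows).
	Code header = parse_struct( code(
		struct ArrayHeader
		{
			usize     Num;
			usize     Capacity;
			allocator Allocator;
		};
	));

	// ... hand `header` to a Builder to serialize it into a generated header
	//     (see the `Builder` bullet in the next diff below).

	gen::deinit(); // assumed library teardown
	return 0;
}
#endif
```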
@@ -13,11 +13,11 @@ Standard formats:
 * **base**: Files are in granular pieces separated into four directories:
 * **dependencies**: Originally from the c-zpl library and modified thereafter.
 * **components**: The essential definitions of the library.
-* **helpers**: Contains helper functionality used by base and other libraries to regenerate or generate the other library formats.
+* **helpers**: Contains helper functionality used by base and the variant library generators.
 * `base_codegen.hpp`: Helps with self-hosted code generation of enums, and operator overload inlines of the code types.
-* `<push/pop>.<name>.inline.<hpp>`: macros that are meant to be injected at specific locations of the library.
+* `<push/pop>.<name>.inline.<hpp>`: macros that are meant to be injected at specific locations of the library file/s.
 * `misc.hpp`: Misc functionality used by the library generation metaprograms.
-* `undef.macros.h`: Undefines all macros from library that original were intended to leak into user code.
+* `undef.macros.h`: Undefines all macros from library.
 * **auxillary**: Non-essential tooling:
 * `Builder`: Similar conceptually to Jai programming language's *builder*, just opens a file and prepares a string buffer to serialize code into (`builder_print`, `builder_print_fmt`). Then write & close the file when completed (`builder_write`).
 * **`Scanner`**: Interface to load up `Code` from files two basic funcctions are currently provided.

@@ -127,6 +127,14 @@ There are ***five*** header files which are automatically generated using [base_
 [`misc.hpp`](./helpers/misc.hpp): Has shared functions used by the library generation meta-programs throughout this codebase.

+If using the library's provided build scripts:
+
+```ps1
+.\build.ps1 <compiler> <debug or omit> base
+```
+
+Will refresh those files.

 ## On multi-threading

 Currently unsupported. I want the library to be *stable* and *correct*, with the addition of exhausting all basic single-threaded optimizations before I consider multi-threading.

@@ -146,7 +154,7 @@ Names or Content fields are interned strings and thus showed be cached using `ge
 `def_operator` is the most sophisticated upfront constructor as it has multiple permutations of definitions that could be created that are not trivial to determine if valid.

-The parser is documented under [`docs/Parsing.md`](../docs/Parsing.md) and [`docs/Parser_Algo.md`](../docs/Parser_Algo.md). Extending it is more serious.
+The parser is documented under [`docs/Parsing.md`](../docs/Parsing.md) and [`docs/Parser_Algo.md`](../docs/Parser_Algo.md).

 ## A note on compilation and runtime generation speed
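The `Builder` bullet above names `builder_print`, `builder_print_fmt`, and `builder_write`. A minimal sketch of that flow follows; the `builder_open` entry point and the pointer-style arguments are assumptions for illustration, only the three names above come from the readme.

```cpp
// Hedged sketch of the Builder flow described above.
void emit_header( Code header )
{
	Builder src = builder_open( "gen/array.gen.hpp" );  // assumed open helper
	builder_print_fmt( & src, "// generated at GEN_TIME - do not edit\n" );
	builder_print( & src, header );  // serialize a constructed Code AST (e.g. the ArrayHeader above)
	builder_write( & src );          // write & close the file when completed
}
```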
@@ -1161,27 +1161,20 @@ bool code_is_equal( Code self, Code other )
 bool code_validate_body(Code self)
 {
-#define CheckEntries( Unallowed_Types ) \
-do \
-{ \
-CodeBody body = cast(CodeBody, self); \
-for ( Code code_entry = begin_CodeBody(body); code_entry != end_CodeBody(body); next_CodeBody(body, code_entry) ) \
-{ \
-switch ( code_entry->Type ) \
-{ \
-Unallowed_Types \
-log_failure( "AST::validate_body: Invalid entry in body %SC", code_debug_str(code_entry) ); \
-return false; \
-} \
-} \
-} \
-while (0);

 switch ( self->Type )
 {
 case CT_Class_Body:
 {
-CheckEntries( GEN_AST_BODY_CLASS_UNALLOWED_TYPES );
+CodeBody body = cast(CodeBody, self);
+for (Code code_entry = begin_CodeBody(body); code_entry != end_CodeBody(body); next_CodeBody(body, code_entry)) switch (code_entry->Type)
+{
+GEN_AST_BODY_CLASS_UNALLOWED_TYPES:
+log_failure("AST::validate_body: Invalid entry in body %SC", code_debug_str(code_entry));
+return false;
+
+default:
+continue;
+}
 }
 break;
 case CT_Enum_Body:
@@ -1199,57 +1192,77 @@ bool code_validate_body(Code self)
 break;
 case CT_Export_Body:
 {
-CheckEntries( GEN_AST_BODY_CLASS_UNALLOWED_TYPES );
+CodeBody body = cast(CodeBody, self);
+for (Code code_entry = begin_CodeBody(body); code_entry != end_CodeBody(body); next_CodeBody(body, code_entry)) switch (code_entry->Type)
+{
+GEN_AST_BODY_EXPORT_UNALLOWED_TYPES:
+log_failure("AST::validate_body: Invalid entry in body %SC", code_debug_str(code_entry));
+return false;
+
+default:
+continue;
+}
 }
 break;
 case CT_Extern_Linkage:
 {
-CheckEntries( GEN_AST_BODY_EXTERN_LINKAGE_UNALLOWED_TYPES );
+CodeBody body = cast(CodeBody, self);
+for (Code code_entry = begin_CodeBody(body); code_entry != end_CodeBody(body); next_CodeBody(body, code_entry)) switch (code_entry->Type)
+{
+GEN_AST_BODY_EXTERN_LINKAGE_UNALLOWED_TYPES:
+log_failure("AST::validate_body: Invalid entry in body %SC", code_debug_str(code_entry));
+return false;
+
+default:
+continue;
+}
 }
 break;
 case CT_Function_Body:
 {
-CheckEntries( GEN_AST_BODY_FUNCTION_UNALLOWED_TYPES );
+CodeBody body = cast(CodeBody, self);
+for (Code code_entry = begin_CodeBody(body); code_entry != end_CodeBody(body); next_CodeBody(body, code_entry)) switch (code_entry->Type)
+{
+GEN_AST_BODY_FUNCTION_UNALLOWED_TYPES:
+log_failure("AST::validate_body: Invalid entry in body %SC", code_debug_str(code_entry));
+return false;
+
+default:
+continue;
+}
 }
 break;
 case CT_Global_Body:
 {
 CodeBody body = cast(CodeBody, self);
-for ( Code entry = begin_CodeBody(body); entry != end_CodeBody(body); next_CodeBody(body, entry) )
+for ( Code entry = begin_CodeBody(body); entry != end_CodeBody(body); next_CodeBody(body, entry) )switch (entry->Type)
 {
-switch (entry->Type)
-{
-case CT_Access_Public:
-case CT_Access_Protected:
-case CT_Access_Private:
-case CT_PlatformAttributes:
-case CT_Class_Body:
-case CT_Enum_Body:
-case CT_Execution:
-case CT_Friend:
-case CT_Function_Body:
-case CT_Global_Body:
-case CT_Namespace_Body:
-case CT_Operator_Member:
-case CT_Operator_Member_Fwd:
-case CT_Parameters:
-case CT_Specifiers:
-case CT_Struct_Body:
-case CT_Typename:
-log_failure("AST::validate_body: Invalid entry in body %SC", code_debug_str(entry));
-return false;
-}
+GEN_AST_BODY_GLOBAL_UNALLOWED_TYPES:
+log_failure("AST::validate_body: Invalid entry in body %SC", code_debug_str(entry));
+return false;
 }
 }
 break;
 case CT_Namespace_Body:
 {
-CheckEntries( GEN_AST_BODY_NAMESPACE_UNALLOWED_TYPES );
+CodeBody body = cast(CodeBody, self);
+for ( Code entry = begin_CodeBody(body); entry != end_CodeBody(body); next_CodeBody(body, entry) ) switch (entry->Type)
+{
+GEN_AST_BODY_NAMESPACE_UNALLOWED_TYPES:
+log_failure("AST::validate_body: Invalid entry in body %SC", code_debug_str(entry));
+return false;
+}
 }
 break;
 case CT_Struct_Body:
 {
-CheckEntries( GEN_AST_BODY_STRUCT_UNALLOWED_TYPES );
+CodeBody body = cast(CodeBody, self);
+for ( Code entry = begin_CodeBody(body); entry != end_CodeBody(body); next_CodeBody(body, entry) ) switch (entry->Type)
+{
+GEN_AST_BODY_STRUCT_UNALLOWED_TYPES:
+log_failure("AST::validate_body: Invalid entry in body %SC", code_debug_str(entry));
+return false;
+}
 }
 break;
 case CT_Union_Body:
@@ -1272,6 +1285,4 @@ bool code_validate_body(Code self)
 }

 return false;

-#undef CheckEntries
 }
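The rewritten cases above all use the same iteration idiom over a `CodeBody`. As a standalone illustration — a minimal sketch using only the calls that appear in this diff (`begin_CodeBody`/`end_CodeBody`/`next_CodeBody`, `code_debug_str`, `log_fmt`); the `print_entry_types` wrapper name is made up for the example:

```cpp
// Hypothetical helper: walk a body's entries and log each entry's debug string,
// using the same iteration calls the validate_body rewrite above relies on.
void print_entry_types( CodeBody body )
{
	for ( Code entry = begin_CodeBody(body); entry != end_CodeBody(body); next_CodeBody(body, entry) )
	{
		// code_debug_str gives a printable description of the entry (as in the log_failure calls above).
		log_fmt( "%SC\n", code_debug_str(entry) );
	}
}
```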
@@ -1,4 +1,6 @@
-# define GEN_AST_BODY_CLASS_UNALLOWED_TYPES \
+// These macros are used in the swtich cases are used within ast.cpp, inteface.upfront.cpp, parser.cpp
+
+# define GEN_AST_BODY_CLASS_UNALLOWED_TYPES \
 case CT_PlatformAttributes: \
 case CT_Class_Body: \
 case CT_Enum_Body: \
@@ -13,7 +15,7 @@
 case CT_Parameters: \
 case CT_Specifiers: \
 case CT_Struct_Body: \
-case CT_Typename:
+case CT_Typename
 # define GEN_AST_BODY_STRUCT_UNALLOWED_TYPES GEN_AST_BODY_CLASS_UNALLOWED_TYPES

 # define GEN_AST_BODY_FUNCTION_UNALLOWED_TYPES \
@@ -37,7 +39,7 @@
 case CT_Parameters: \
 case CT_Specifiers: \
 case CT_Struct_Body: \
-case CT_Typename:
+case CT_Typename

 # define GEN_AST_BODY_GLOBAL_UNALLOWED_TYPES \
 case CT_Access_Public: \
@@ -55,7 +57,7 @@
 case CT_Parameters: \
 case CT_Specifiers: \
 case CT_Struct_Body: \
-case CT_Typename:
+case CT_Typename
 # define GEN_AST_BODY_EXPORT_UNALLOWED_TYPES GEN_AST_BODY_GLOBAL_UNALLOWED_TYPES
 # define GEN_AST_BODY_EXTERN_LINKAGE_UNALLOWED_TYPES GEN_AST_BODY_GLOBAL_UNALLOWED_TYPES

@@ -75,4 +77,4 @@
 case CT_Parameters: \
 case CT_Specifiers: \
 case CT_Struct_Body: \
-case CT_Typename:
+case CT_Typename
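The colon dropped from the trailing `case CT_Typename` above is what lets the call sites in `code_validate_body` write `GEN_AST_BODY_..._UNALLOWED_TYPES:` and supply the colon themselves, so the macro expands into a plain run of case labels that the failure statements hang off. A minimal illustrative sketch with a shortened stand-in macro (the enum and names here are placeholders, not the library's):

```cpp
// Illustrative stand-in for the GEN_AST_BODY_*_UNALLOWED_TYPES pattern,
// showing why the final case label has no colon: the use site adds it.
enum ExampleType { CT_Class_Body, CT_Enum_Body, CT_Typename, CT_Untyped };

#define EXAMPLE_UNALLOWED_TYPES \
	case CT_Class_Body: \
	case CT_Enum_Body: \
	case CT_Typename

bool example_validate( ExampleType type )
{
	switch ( type )
	{
		// Expands to: case CT_Class_Body: case CT_Enum_Body: case CT_Typename:
		EXAMPLE_UNALLOWED_TYPES:
			return false; // unallowed entry
		default:
			return true;
	}
}
```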
@@ -145,43 +145,34 @@ void define_constants()
 preprocess_endif->Type = CT_Preprocess_EndIf;
 code_set_global((Code)preprocess_endif);

-# define def_constant_code_type( Type_ ) \
-do \
-{ \
-StrC name_str = name(Type_); \
-t_##Type_ = def_type( name_str ); \
-code_set_global( cast(Code, t_##Type_)); \
-} while(0)
-
-def_constant_code_type( auto );
-def_constant_code_type( void );
-def_constant_code_type( int );
-def_constant_code_type( bool );
-def_constant_code_type( char );
-def_constant_code_type( wchar_t );
-def_constant_code_type( class );
-def_constant_code_type( typename );
+StrC auto_str = txt("auto"); t_auto = def_type( auto_str ); code_set_global( t_auto );
+StrC void_str = txt("void"); t_void = def_type( void_str ); code_set_global( t_void );
+StrC int_str = txt("int"); t_int = def_type( int_str ); code_set_global( t_int );
+StrC bool_str = txt("bool"); t_bool = def_type( bool_str ); code_set_global( t_bool );
+StrC char_str = txt("char"); t_char = def_type( char_str ); code_set_global( t_char );
+StrC wchar_str = txt("wchar_t"); t_wchar_t = def_type( wchar_str ); code_set_global( t_wchar_t );
+StrC class_str = txt("class"); t_class = def_type( class_str ); code_set_global( t_class );
+StrC typename_str = txt("typename"); t_typename = def_type( typename_str ); code_set_global( t_typename );

 #ifdef GEN_DEFINE_LIBRARY_CODE_CONSTANTS
-t_b32 = def_type( name(b32) );
+t_b32 = def_type( name(b32) ); code_set_global( t_b32 );

-def_constant_code_type( s8 );
-def_constant_code_type( s16 );
-def_constant_code_type( s32 );
-def_constant_code_type( s64 );
+StrC s8_str = txt("s8"); t_s8 = def_type( s8_str ); code_set_global( t_s8 );
+StrC s16_str = txt("s16"); t_s16 = def_type( s16_str ); code_set_global( t_s16 );
+StrC s32_str = txt("s32"); t_s32 = def_type( s32_str ); code_set_global( t_s32 );
+StrC s64_str = txt("s64"); t_s64 = def_type( s64_str ); code_set_global( t_s64 );

-def_constant_code_type( u8 );
-def_constant_code_type( u16 );
-def_constant_code_type( u32 );
-def_constant_code_type( u64 );
+StrC u8_str = txt("u8"); t_u8 = def_type( u8_str ); code_set_global( t_u8 );
+StrC u16_str = txt("u16"); t_u16 = def_type( u16_str ); code_set_global( t_u16 );
+StrC u32_str = txt("u32"); t_u32 = def_type( u32_str ); code_set_global( t_u32 );
+StrC u64_str = txt("u64"); t_u64 = def_type( u64_str ); code_set_global( t_u64 );

-def_constant_code_type( ssize );
-def_constant_code_type( usize );
+StrC ssize_str = txt("ssize"); t_ssize = def_type( ssize_str ); code_set_global( t_ssize );
+StrC usize_str = txt("usize"); t_usize = def_type( usize_str ); code_set_global( t_usize );

-def_constant_code_type( f32 );
-def_constant_code_type( f64 );
+StrC f32_str = txt("f32"); t_f32 = def_type( f32_str ); code_set_global( t_f32 );
+StrC f64_str = txt("f64"); t_f64 = def_type( f64_str ); code_set_global( t_f64 );
 #endif
-# undef def_constant_code_type

 spec_const = def_specifier( Spec_Const); code_set_global( cast(Code, spec_const ));
 spec_consteval = def_specifier( Spec_Consteval); code_set_global( cast(Code, spec_consteval ));;
File diff suppressed because it is too large
@@ -175,48 +175,6 @@ global FixedArena_256KB Lexer_defines_map_arena;
 global StringTable Lexer_defines;
 global Array(Token) Lexer_Tokens;

-#define current ( * ctx->scanner )
-
-#define move_forward() \
-{ \
-if ( current == '\n' ) \
-{ \
-ctx->line++; \
-ctx->column = 1; \
-} \
-else \
-{ \
-ctx->column++; \
-} \
-ctx->left--; \
-ctx->scanner++; \
-}
-
-#define skip_whitespace() \
-while ( ctx->left && char_is_space( current ) ) \
-{ \
-move_forward(); \
-}
-
-#define end_line() \
-do \
-{ \
-while ( ctx->left && current == ' ' ) \
-{ \
-move_forward(); \
-} \
-if ( ctx->left && current == '\r' ) \
-{ \
-move_forward(); \
-move_forward(); \
-} \
-else if ( ctx->left && current == '\n' ) \
-{ \
-move_forward(); \
-} \
-} \
-while (0)

 enum
 {
 Lex_Continue,
@@ -234,6 +192,44 @@ struct LexContext
 Token token;
 };

+forceinline
+void lexer_move_forward( LexContext* ctx )
+{
+if ( * ctx->scanner == '\n' ) {
+ctx->line += 1;
+ctx->column = 1;
+}
+else {
+++ ctx->column;
+}
+-- ctx->left;
+++ ctx->scanner;
+}
+#define move_forward() lexer_move_forward(ctx)
+
+forceinline
+void lexer_skip_whitespace( LexContext* ctx )
+{
+while ( ctx->left && char_is_space( * ctx->scanner ) )
+move_forward();
+}
+#define skip_whitespace() lexer_skip_whitespace(ctx)
+
+forceinline
+void lexer_end_line( LexContext* ctx )
+{
+while ( ctx->left && (* ctx->scanner) == ' ' )
+move_forward();
+
+if ( ctx->left && (* ctx->scanner) == '\r' ) {
+move_forward();
+move_forward();
+}
+else if ( ctx->left && (* ctx->scanner) == '\n' )
+move_forward();
+}
+#define end_line() lexer_end_line(ctx)

 forceinline
 s32 lex_preprocessor_directive( LexContext* ctx )
 {
@ -245,7 +241,7 @@ s32 lex_preprocessor_directive( LexContext* ctx )
|
||||
skip_whitespace();
|
||||
|
||||
ctx->token.Text = ctx->scanner;
|
||||
while (ctx->left && ! char_is_space(current) )
|
||||
while (ctx->left && ! char_is_space((* ctx->scanner)) )
|
||||
{
|
||||
move_forward();
|
||||
ctx->token.Length++;
|
||||
@ -263,24 +259,24 @@ s32 lex_preprocessor_directive( LexContext* ctx )
|
||||
s32 within_char = false;
|
||||
while ( ctx->left )
|
||||
{
|
||||
if ( current == '"' && ! within_char )
|
||||
if ( * ctx->scanner == '"' && ! within_char )
|
||||
within_string ^= true;
|
||||
|
||||
if ( current == '\'' && ! within_string )
|
||||
if ( * ctx->scanner == '\'' && ! within_string )
|
||||
within_char ^= true;
|
||||
|
||||
if ( current == '\\' && ! within_string && ! within_char )
|
||||
if ( * ctx->scanner == '\\' && ! within_string && ! within_char )
|
||||
{
|
||||
move_forward();
|
||||
ctx->token.Length++;
|
||||
|
||||
if ( current == '\r' )
|
||||
if ( (* ctx->scanner) == '\r' )
|
||||
{
|
||||
move_forward();
|
||||
ctx->token.Length++;
|
||||
}
|
||||
|
||||
if ( current == '\n' )
|
||||
if ( (* ctx->scanner) == '\n' )
|
||||
{
|
||||
move_forward();
|
||||
ctx->token.Length++;
|
||||
@ -290,19 +286,19 @@ s32 lex_preprocessor_directive( LexContext* ctx )
|
||||
{
|
||||
log_failure( "gen::Parser::lex: Invalid escape sequence '\\%c' (%d, %d)"
|
||||
" in preprocessor directive (%d, %d)\n%.100s"
|
||||
, current, ctx->line, ctx->column
|
||||
, (* ctx->scanner), ctx->line, ctx->column
|
||||
, ctx->token.Line, ctx->token.Column, ctx->token.Text );
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if ( current == '\r' )
|
||||
if ( (* ctx->scanner) == '\r' )
|
||||
{
|
||||
move_forward();
|
||||
ctx->token.Length++;
|
||||
}
|
||||
|
||||
if ( current == '\n' )
|
||||
if ( (* ctx->scanner) == '\n' )
|
||||
{
|
||||
move_forward();
|
||||
ctx->token.Length++;
|
||||
@ -343,13 +339,13 @@ s32 lex_preprocessor_directive( LexContext* ctx )
|
||||
name.Length = 1;
|
||||
move_forward();
|
||||
|
||||
while ( ctx->left && ( char_is_alphanumeric(current) || current == '_' ) )
|
||||
while ( ctx->left && ( char_is_alphanumeric((* ctx->scanner)) || (* ctx->scanner) == '_' ) )
|
||||
{
|
||||
move_forward();
|
||||
name.Length++;
|
||||
}
|
||||
|
||||
if ( ctx->left && current == '(' )
|
||||
if ( ctx->left && (* ctx->scanner) == '(' )
|
||||
{
|
||||
move_forward();
|
||||
name.Length++;
|
||||
@ -367,12 +363,12 @@ s32 lex_preprocessor_directive( LexContext* ctx )
|
||||
{
|
||||
preprocess_content.Type = Tok_String;
|
||||
|
||||
if ( current != '"' && current != '<' )
|
||||
if ( (* ctx->scanner) != '"' && (* ctx->scanner) != '<' )
|
||||
{
|
||||
String directive_str = string_fmt_buf( GlobalAllocator, "%.*s", min( 80, ctx->left + preprocess_content.Length ), ctx->token.Text );
|
||||
|
||||
log_failure( "gen::Parser::lex: Expected '\"' or '<' after #include, not '%c' (%d, %d)\n%s"
|
||||
, current
|
||||
, (* ctx->scanner)
|
||||
, preprocess_content.Line
|
||||
, preprocess_content.Column
|
||||
, (char*) directive_str
|
||||
@ -382,7 +378,7 @@ s32 lex_preprocessor_directive( LexContext* ctx )
|
||||
move_forward();
|
||||
preprocess_content.Length++;
|
||||
|
||||
while ( ctx->left && current != '"' && current != '>' )
|
||||
while ( ctx->left && (* ctx->scanner) != '"' && (* ctx->scanner) != '>' )
|
||||
{
|
||||
move_forward();
|
||||
preprocess_content.Length++;
|
||||
@ -391,12 +387,12 @@ s32 lex_preprocessor_directive( LexContext* ctx )
|
||||
move_forward();
|
||||
preprocess_content.Length++;
|
||||
|
||||
if ( current == '\r' && ctx->scanner[1] == '\n' )
|
||||
if ( (* ctx->scanner) == '\r' && ctx->scanner[1] == '\n' )
|
||||
{
|
||||
move_forward();
|
||||
move_forward();
|
||||
}
|
||||
else if ( current == '\n' )
|
||||
else if ( (* ctx->scanner) == '\n' )
|
||||
{
|
||||
move_forward();
|
||||
}
|
||||
@ -411,24 +407,24 @@ s32 lex_preprocessor_directive( LexContext* ctx )
|
||||
// SkipWhitespace();
|
||||
while ( ctx->left )
|
||||
{
|
||||
if ( current == '"' && ! within_char )
|
||||
if ( (* ctx->scanner) == '"' && ! within_char )
|
||||
within_string ^= true;
|
||||
|
||||
if ( current == '\'' && ! within_string )
|
||||
if ( (* ctx->scanner) == '\'' && ! within_string )
|
||||
within_char ^= true;
|
||||
|
||||
if ( current == '\\' && ! within_string && ! within_char )
|
||||
if ( (* ctx->scanner) == '\\' && ! within_string && ! within_char )
|
||||
{
|
||||
move_forward();
|
||||
preprocess_content.Length++;
|
||||
|
||||
if ( current == '\r' )
|
||||
if ( (* ctx->scanner) == '\r' )
|
||||
{
|
||||
move_forward();
|
||||
preprocess_content.Length++;
|
||||
}
|
||||
|
||||
if ( current == '\n' )
|
||||
if ( (* ctx->scanner) == '\n' )
|
||||
{
|
||||
move_forward();
|
||||
preprocess_content.Length++;
|
||||
@ -441,20 +437,20 @@ s32 lex_preprocessor_directive( LexContext* ctx )
|
||||
|
||||
log_failure( "gen::Parser::lex: Invalid escape sequence '\\%c' (%d, %d)"
|
||||
" in preprocessor directive '%s' (%d, %d)\n%s"
|
||||
, current, ctx->line, ctx->column
|
||||
, (* ctx->scanner), ctx->line, ctx->column
|
||||
, directive_str, preprocess_content.Line, preprocess_content.Column
|
||||
, content_str );
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if ( current == '\r' )
|
||||
if ( (* ctx->scanner) == '\r' )
|
||||
{
|
||||
break;
|
||||
//move_forward();
|
||||
}
|
||||
|
||||
if ( current == '\n' )
|
||||
if ( (* ctx->scanner) == '\n' )
|
||||
{
|
||||
//move_forward();
|
||||
break;
|
||||
@ -493,7 +489,7 @@ void lex_found_token( LexContext* ctx )
|
||||
{
|
||||
skip_whitespace();
|
||||
|
||||
if ( current != '"' )
|
||||
if ( (* ctx->scanner) != '"' )
|
||||
{
|
||||
type = Tok_Spec_Extern;
|
||||
ctx->token.Flags |= TF_Specifier;
|
||||
@ -523,7 +519,7 @@ void lex_found_token( LexContext* ctx )
|
||||
}
|
||||
|
||||
u64 key = 0;
|
||||
if ( current == '(')
|
||||
if ( (* ctx->scanner) == '(')
|
||||
key = crc32( ctx->token.Text, ctx->token.Length + 1 );
|
||||
else
|
||||
key = crc32( ctx->token.Text, ctx->token.Length );
|
||||
@ -534,18 +530,18 @@ void lex_found_token( LexContext* ctx )
|
||||
ctx->token.Type = Tok_Preprocess_Macro;
|
||||
|
||||
// Want to ignore any arguments the define may have as they can be execution expressions.
|
||||
if ( ctx->left && current == '(' )
|
||||
if ( ctx->left && (* ctx->scanner) == '(' )
|
||||
{
|
||||
move_forward();
|
||||
ctx->token.Length++;
|
||||
|
||||
s32 level = 0;
|
||||
while ( ctx->left && (current != ')' || level > 0) )
|
||||
while ( ctx->left && ((* ctx->scanner) != ')' || level > 0) )
|
||||
{
|
||||
if ( current == '(' )
|
||||
if ( (* ctx->scanner) == '(' )
|
||||
level++;
|
||||
|
||||
else if ( current == ')' && level > 0 )
|
||||
else if ( (* ctx->scanner) == ')' && level > 0 )
|
||||
level--;
|
||||
|
||||
move_forward();
|
||||
@ -556,12 +552,12 @@ void lex_found_token( LexContext* ctx )
|
||||
ctx->token.Length++;
|
||||
}
|
||||
|
||||
//if ( current == '\r' && ctx->scanner[1] == '\n' )
|
||||
//if ( (* ctx->scanner) == '\r' && ctx->scanner[1] == '\n' )
|
||||
//{
|
||||
// move_forward();
|
||||
// ctx->token.Length++;
|
||||
//}
|
||||
//else if ( current == '\n' )
|
||||
//else if ( (* ctx->scanner) == '\n' )
|
||||
//{
|
||||
// move_forward();
|
||||
// ctx->token.Length++;
|
||||
@ -637,13 +633,13 @@ TokArray lex( StrC content )
|
||||
|
||||
if ( c.column == 1 )
|
||||
{
|
||||
if ( current == '\r')
|
||||
if ( (* ctx->scanner) == '\r')
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length = 1;
|
||||
}
|
||||
|
||||
if ( current == '\n' )
|
||||
if ( (* ctx->scanner) == '\n' )
|
||||
{
|
||||
move_forward();
|
||||
|
||||
@ -661,12 +657,8 @@ TokArray lex( StrC content )
|
||||
if ( c.left <= 0 )
|
||||
break;
|
||||
|
||||
switch ( current )
|
||||
switch ( (* ctx->scanner) )
|
||||
{
|
||||
if (array_back(Lexer_Tokens)->Length > 100 ) {
|
||||
__debugbreak();
|
||||
}
|
||||
|
||||
case '#':
|
||||
{
|
||||
s32 result = lex_preprocessor_directive( ctx );
|
||||
@ -681,13 +673,13 @@ TokArray lex( StrC content )
|
||||
Token thanks_c = { c.scanner, 0, Tok_Invalid, c.line, c.column, TF_Null };
|
||||
c.token = thanks_c;
|
||||
}
|
||||
if ( current == '\r')
|
||||
if ( (* ctx->scanner) == '\r')
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length = 1;
|
||||
}
|
||||
|
||||
if ( current == '\n' )
|
||||
if ( (* ctx->scanner) == '\n' )
|
||||
{
|
||||
c.token.Type = Tok_NewLine;
|
||||
c.token.Length++;
|
||||
@ -696,7 +688,6 @@ TokArray lex( StrC content )
|
||||
array_append( Lexer_Tokens, c.token );
|
||||
}
|
||||
}
|
||||
|
||||
continue;
|
||||
}
|
||||
|
||||
@ -718,10 +709,10 @@ TokArray lex( StrC content )
|
||||
move_forward();
|
||||
}
|
||||
|
||||
if ( current == '.' )
|
||||
if ( (* ctx->scanner) == '.' )
|
||||
{
|
||||
move_forward();
|
||||
if( current == '.' )
|
||||
if( (* ctx->scanner) == '.' )
|
||||
{
|
||||
c.token.Length = 3;
|
||||
c.token.Type = Tok_Varadic_Argument;
|
||||
@ -732,7 +723,7 @@ TokArray lex( StrC content )
|
||||
{
|
||||
String context_str = string_fmt_buf( GlobalAllocator, "%s", c.scanner, min( 100, c.left ) );
|
||||
|
||||
log_failure( "gen::lex: invalid varadic argument, expected '...' got '..%c' (%d, %d)\n%s", current, c.line, c.column, context_str );
|
||||
log_failure( "gen::lex: invalid varadic argument, expected '...' got '..%c' (%d, %d)\n%s", (* ctx->scanner), c.line, c.column, context_str );
|
||||
}
|
||||
}
|
||||
|
||||
@ -749,7 +740,7 @@ TokArray lex( StrC content )
|
||||
if (c.left)
|
||||
move_forward();
|
||||
|
||||
if ( current == '&' ) // &&
|
||||
if ( (* ctx->scanner) == '&' ) // &&
|
||||
{
|
||||
c.token.Length = 2;
|
||||
c.token.Type = Tok_Ampersand_DBL;
|
||||
@ -771,7 +762,7 @@ TokArray lex( StrC content )
|
||||
if (c.left)
|
||||
move_forward();
|
||||
|
||||
if ( current == ':' )
|
||||
if ( (* ctx->scanner) == ':' )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Type = Tok_Access_StaticSymbol;
|
||||
@ -811,7 +802,7 @@ TokArray lex( StrC content )
|
||||
{
|
||||
move_forward();
|
||||
|
||||
if ( current == ']' )
|
||||
if ( (* ctx->scanner) == ']' )
|
||||
{
|
||||
c.token.Length = 2;
|
||||
c.token.Type = Tok_Operator;
|
||||
@ -859,19 +850,19 @@ TokArray lex( StrC content )
|
||||
|
||||
move_forward();
|
||||
|
||||
if ( c.left && current == '\\' )
|
||||
if ( c.left && (* ctx->scanner) == '\\' )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
|
||||
if ( current == '\'' )
|
||||
if ( (* ctx->scanner) == '\'' )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
}
|
||||
}
|
||||
|
||||
while ( c.left && current != '\'' )
|
||||
while ( c.left && (* ctx->scanner) != '\'' )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
@ -906,7 +897,7 @@ TokArray lex( StrC content )
|
||||
if (c.left)
|
||||
move_forward();
|
||||
|
||||
if ( current == '=' )
|
||||
if ( (* ctx->scanner) == '=' )
|
||||
{
|
||||
c.token.Length++;
|
||||
c.token.Flags |= TF_Assign;
|
||||
@ -941,13 +932,13 @@ TokArray lex( StrC content )
|
||||
move_forward();
|
||||
while ( c.left )
|
||||
{
|
||||
if ( current == '"' )
|
||||
if ( (* ctx->scanner) == '"' )
|
||||
{
|
||||
move_forward();
|
||||
break;
|
||||
}
|
||||
|
||||
if ( current == '\\' )
|
||||
if ( (* ctx->scanner) == '\\' )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
@ -990,7 +981,7 @@ TokArray lex( StrC content )
|
||||
if (c.left)
|
||||
move_forward();
|
||||
|
||||
if ( current == '=' )
|
||||
if ( (* ctx->scanner) == '=' )
|
||||
{
|
||||
c.token.Length++;
|
||||
c.token.Flags = TF_Operator;
|
||||
@ -1045,7 +1036,7 @@ TokArray lex( StrC content )
|
||||
if (c.left)
|
||||
move_forward();
|
||||
|
||||
if ( current == '=' )
|
||||
if ( (* ctx->scanner) == '=' )
|
||||
{
|
||||
c.token.Length++;
|
||||
c.token.Flags |= TF_Assign;
|
||||
@ -1055,7 +1046,7 @@ TokArray lex( StrC content )
|
||||
if (c.left)
|
||||
move_forward();
|
||||
}
|
||||
else while ( c.left && current == *(c.scanner - 1) && c.token.Length < 3 )
|
||||
else while ( c.left && (* ctx->scanner) == *(c.scanner - 1) && c.token.Length < 3 )
|
||||
{
|
||||
c.token.Length++;
|
||||
|
||||
@ -1077,21 +1068,21 @@ TokArray lex( StrC content )
|
||||
{
|
||||
move_forward();
|
||||
|
||||
if ( current == '>' )
|
||||
if ( (* ctx->scanner) == '>' )
|
||||
{
|
||||
c.token.Length++;
|
||||
// token.Type = Tok_Access_PointerToMemberSymbol;
|
||||
c.token.Flags |= TF_AccessOperator;
|
||||
move_forward();
|
||||
|
||||
if ( current == '*' )
|
||||
if ( (* ctx->scanner) == '*' )
|
||||
{
|
||||
// token.Type = Tok_Access_PointerToMemberOfPointerSymbol;
|
||||
c.token.Length++;
|
||||
move_forward();
|
||||
}
|
||||
}
|
||||
else if ( current == '=' )
|
||||
else if ( (* ctx->scanner) == '=' )
|
||||
{
|
||||
c.token.Length++;
|
||||
// token.Type = Tok_Assign_Subtract;
|
||||
@ -1100,7 +1091,7 @@ TokArray lex( StrC content )
|
||||
if (c.left)
|
||||
move_forward();
|
||||
}
|
||||
else while ( c.left && current == *(c.scanner - 1) && c.token.Length < 3 )
|
||||
else while ( c.left && (* ctx->scanner) == *(c.scanner - 1) && c.token.Length < 3 )
|
||||
{
|
||||
c.token.Length++;
|
||||
|
||||
@ -1121,32 +1112,32 @@ TokArray lex( StrC content )
|
||||
|
||||
if ( c.left )
|
||||
{
|
||||
if ( current == '=' )
|
||||
if ( (* ctx->scanner) == '=' )
|
||||
{
|
||||
// token.Type = TokeType::Assign_Divide;
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
c.token.Flags = TF_Assign;
|
||||
}
|
||||
else if ( current == '/' )
|
||||
else if ( (* ctx->scanner) == '/' )
|
||||
{
|
||||
c.token.Type = Tok_Comment;
|
||||
c.token.Length = 2;
|
||||
c.token.Flags = TF_Null;
|
||||
move_forward();
|
||||
|
||||
while ( c.left && current != '\n' && current != '\r' )
|
||||
while ( c.left && (* ctx->scanner) != '\n' && (* ctx->scanner) != '\r' )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
}
|
||||
|
||||
if ( current == '\r' )
|
||||
if ( (* ctx->scanner) == '\r' )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
}
|
||||
if ( current == '\n' )
|
||||
if ( (* ctx->scanner) == '\n' )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
@ -1154,14 +1145,14 @@ TokArray lex( StrC content )
|
||||
array_append( Lexer_Tokens, c.token );
|
||||
continue;
|
||||
}
|
||||
else if ( current == '*' )
|
||||
else if ( (* ctx->scanner) == '*' )
|
||||
{
|
||||
c.token.Type = Tok_Comment;
|
||||
c.token.Length = 2;
|
||||
c.token.Flags = TF_Null;
|
||||
move_forward();
|
||||
|
||||
bool star = current == '*';
|
||||
bool star = (* ctx->scanner) == '*';
|
||||
bool slash = c.scanner[1] == '/';
|
||||
bool at_end = star && slash;
|
||||
while ( c.left && ! at_end )
|
||||
@ -1169,7 +1160,7 @@ TokArray lex( StrC content )
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
|
||||
star = current == '*';
|
||||
star = (* ctx->scanner) == '*';
|
||||
slash = c.scanner[1] == '/';
|
||||
at_end = star && slash;
|
||||
}
|
||||
@ -1177,12 +1168,12 @@ TokArray lex( StrC content )
|
||||
move_forward();
|
||||
move_forward();
|
||||
|
||||
if ( current == '\r' )
|
||||
if ( (* ctx->scanner) == '\r' )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
}
|
||||
if ( current == '\n' )
|
||||
if ( (* ctx->scanner) == '\n' )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
@ -1196,13 +1187,13 @@ TokArray lex( StrC content )
|
||||
}
|
||||
}
|
||||
|
||||
if ( char_is_alpha( current ) || current == '_' )
|
||||
if ( char_is_alpha( (* ctx->scanner) ) || (* ctx->scanner) == '_' )
|
||||
{
|
||||
c.token.Text = c.scanner;
|
||||
c.token.Length = 1;
|
||||
move_forward();
|
||||
|
||||
while ( c.left && ( char_is_alphanumeric(current) || current == '_' ) )
|
||||
while ( c.left && ( char_is_alphanumeric((* ctx->scanner)) || (* ctx->scanner) == '_' ) )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
@ -1210,7 +1201,7 @@ TokArray lex( StrC content )
|
||||
|
||||
goto FoundToken;
|
||||
}
|
||||
else if ( char_is_digit(current) )
|
||||
else if ( char_is_digit((* ctx->scanner)) )
|
||||
{
|
||||
// This is a very brute force lex, no checks are done for validity of literal.
|
||||
|
||||
@ -1221,15 +1212,15 @@ TokArray lex( StrC content )
|
||||
move_forward();
|
||||
|
||||
if (c.left
|
||||
&& ( current == 'x' || current == 'X'
|
||||
|| current == 'b' || current == 'B'
|
||||
|| current == 'o' || current == 'O' )
|
||||
&& ( (* ctx->scanner) == 'x' || (* ctx->scanner) == 'X'
|
||||
|| (* ctx->scanner) == 'b' || (* ctx->scanner) == 'B'
|
||||
|| (* ctx->scanner) == 'o' || (* ctx->scanner) == 'O' )
|
||||
)
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
|
||||
while ( c.left && char_is_hex_digit(current) )
|
||||
while ( c.left && char_is_hex_digit((* ctx->scanner)) )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
@ -1238,18 +1229,18 @@ TokArray lex( StrC content )
|
||||
goto FoundToken;
|
||||
}
|
||||
|
||||
while ( c.left && char_is_digit(current) )
|
||||
while ( c.left && char_is_digit((* ctx->scanner)) )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
}
|
||||
|
||||
if ( c.left && current == '.' )
|
||||
if ( c.left && (* ctx->scanner) == '.' )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
|
||||
while ( c.left && char_is_digit(current) )
|
||||
while ( c.left && char_is_digit((* ctx->scanner)) )
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
@ -1257,18 +1248,18 @@ TokArray lex( StrC content )
|
||||
|
||||
// Handle number literal suffixes in a botched way
|
||||
if (c.left && (
|
||||
current == 'l' || current == 'L' || // long/long long
|
||||
current == 'u' || current == 'U' || // unsigned
|
||||
current == 'f' || current == 'F' || // float
|
||||
current == 'i' || current == 'I' || // imaginary
|
||||
current == 'z' || current == 'Z')) // complex
|
||||
(* ctx->scanner) == 'l' || (* ctx->scanner) == 'L' || // long/long long
|
||||
(* ctx->scanner) == 'u' || (* ctx->scanner) == 'U' || // unsigned
|
||||
(* ctx->scanner) == 'f' || (* ctx->scanner) == 'F' || // float
|
||||
(* ctx->scanner) == 'i' || (* ctx->scanner) == 'I' || // imaginary
|
||||
(* ctx->scanner) == 'z' || (* ctx->scanner) == 'Z')) // complex
|
||||
{
|
||||
char prev = current;
|
||||
char prev = (* ctx->scanner);
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
|
||||
// Handle 'll'/'LL' as a special case when we just processed an 'l'/'L'
|
||||
if (c.left && (prev == 'l' || prev == 'L') && (current == 'l' || current == 'L'))
|
||||
if (c.left && (prev == 'l' || prev == 'L') && ((* ctx->scanner) == 'l' || (* ctx->scanner) == 'L'))
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length++;
|
||||
@ -1292,10 +1283,10 @@ TokArray lex( StrC content )
|
||||
}
|
||||
|
||||
String context_str = string_fmt_buf( GlobalAllocator, "%.*s", min( 100, c.left ), c.scanner );
|
||||
log_failure( "Failed to lex token '%c' (%d, %d)\n%s", current, c.line, c.column, context_str );
|
||||
log_failure( "Failed to lex token '%c' (%d, %d)\n%s", (* ctx->scanner), c.line, c.column, context_str );
|
||||
|
||||
// Skip to next whitespace since we can't know if anything else is valid until then.
|
||||
while ( c.left && ! char_is_space( current ) )
|
||||
while ( c.left && ! char_is_space( (* ctx->scanner) ) )
|
||||
{
|
||||
move_forward();
|
||||
}
|
||||
@ -1309,13 +1300,13 @@ TokArray lex( StrC content )
|
||||
{
|
||||
Token thanks_c = { c.scanner, 0, Tok_Invalid, c.line, c.column, TF_Null };
|
||||
c.token = thanks_c;
|
||||
if ( current == '\r')
|
||||
if ( (* ctx->scanner) == '\r')
|
||||
{
|
||||
move_forward();
|
||||
c.token.Length = 1;
|
||||
}
|
||||
|
||||
if ( current == '\n' )
|
||||
if ( (* ctx->scanner) == '\n' )
|
||||
{
|
||||
c.token.Type = Tok_NewLine;
|
||||
c.token.Length++;
|
||||
@ -1342,8 +1333,8 @@ TokArray lex( StrC content )
|
||||
TokArray result = { Lexer_Tokens, 0 };
|
||||
return result;
|
||||
}
|
||||
#undef current
|
||||
#undef move_forward
|
||||
#undef SkipWhitespace
|
||||
#undef skip_whitespace
|
||||
#undef end_line
|
||||
|
||||
GEN_NS_PARSER_END
|
||||
|
@@ -32,14 +32,14 @@ void parser_push( ParseContext* ctx, StackNode* node )
 node->Prev = ctx->Scope;
 ctx->Scope = node;

-#if 0 && Build_Debug
+#if 0 && GEN_BUILD_DEBUG
 log_fmt("\tEntering Context: %.*s\n", Scope->ProcName.Len, Scope->ProcName.Ptr );
 #endif
 }

 void parser_pop(ParseContext* ctx)
 {
-#if 0 && Build_Debug
+#if 0 && GEN_BUILD_DEBUG
 log_fmt("\tPopping Context: %.*s\n", Scope->ProcName.Len, Scope->ProcName.Ptr );
 #endif
 ctx->Scope = ctx->Scope->Prev;
@@ -128,7 +128,7 @@ bool lex__eat(TokArray* self, TokType type )
 return false;
 }

-#if 0 && Build_Debug
+#if 0 && GEN_BUILD_DEBUG
 log_fmt("Ate: %S\n", self->Arr[Idx].to_string() );
 #endif

@@ -1796,8 +1796,9 @@ CodeBody parse_global_nspace( CodeType which )
 break;

 case Tok_Module_Import: {
+not_implemented( context );
 // import ...
 log_failure( "gen::%s: This function is not implemented" );
 return InvalidCode;
 }
 //! Fallthrough intentional
 case Tok_Attribute_Open:
@@ -5580,11 +5581,10 @@ CodeVar parser_parse_variable()
 return result;
 }

-
 internal
 CodeTypename parser_parse_type_alt( bool from_template, bool* typedef_is_functon )
 {

 return InvalidCode;
 }

 GEN_NS_PARSER_END
@@ -8,16 +8,20 @@

#pragma region Debug

#if defined( _MSC_VER )
# if _MSC_VER < 1300
# define GEN_DEBUG_TRAP() __asm int 3 /* Trap to debugger! */
#if GEN_BUILD_DEBUG
# if defined( GEN_COMPILER_MSVC )
# if _MSC_VER < 1300
# define GEN_DEBUG_TRAP() __asm int 3 /* Trap to debugger! */
# else
# define GEN_DEBUG_TRAP() __debugbreak()
# endif
# elif defined( GEN_COMPILER_TINYC )
# define GEN_DEBUG_TRAP() process_exit( 1 )
# else
# define GEN_DEBUG_TRAP() __debugbreak()
# define GEN_DEBUG_TRAP() __builtin_trap()
# endif
#elif defined( GEN_COMPILER_TINYC )
# define GEN_DEBUG_TRAP() process_exit( 1 )
#else
# define GEN_DEBUG_TRAP() __builtin_trap()
# define GEN_DEBUG_TRAP()
#endif

#define GEN_ASSERT( cond ) GEN_ASSERT_MSG( cond, NULL )

@@ -37,7 +41,7 @@
// NOTE: Things that shouldn't happen with a message!
#define GEN_PANIC( msg, ... ) GEN_ASSERT_MSG( 0, msg, ##__VA_ARGS__ )

-#if Build_Debug
+#if GEN_BULD_DEBUG
#define GEN_FATAL( ... ) \
do \
{ \
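For orientation, the assert family touched in this hunk is used roughly as follows. This is a hedged usage sketch based only on the macro names visible above (`GEN_ASSERT`, `GEN_ASSERT_MSG`, `GEN_PANIC`, `GEN_FATAL`); argument conventions beyond what the hunk shows are assumptions, and `GEN_DEBUG_TRAP()` now only traps when `GEN_BUILD_DEBUG` is set.

```cpp
// Hedged usage sketch for the debug macros above.
void example_checks( void* allocation, int size )
{
	GEN_ASSERT( size > 0 );  // plain condition check
	GEN_ASSERT_MSG( allocation != nullptr, "allocation failed (size: %d)", size );  // printf-style message assumed
	if ( size > 4096 )
		GEN_FATAL( "requested size %d exceeds the arena budget", size );  // fatal path; behavior depends on GEN_BUILD_DEBUG
}
```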
@@ -37,7 +37,7 @@ The full definitions of all asts are within:

 * [`ast.hpp`](../base/components/ast.hpp)
 * [`ast_types.hpp`](../base/components/ast_types.hpp)
-* [`code_types.hpp`](../base/components/ast_types.hpp)
+* [`code_types.hpp`](../base/components/code_types.hpp)

 The C/C++ interface procedures are located with `ast.hpp` (for the Code type), and `code_types.hpp` for all others.
@@ -12,8 +12,8 @@ gencpp uses a hand-written recursive descent parser. Both the lexer and parser c

 ### Lexer

-The lex procedure does the lexical pass of content provided as a `StrC` type.
-The tokens are stored (for now) in `gen::parser::Lexer_Tokens`.
+The lex procedure does the lexical pass of content provided as a `StrC` type.
+The tokens are stored (for now) in `Lexer_Tokens`.

 Fields:
@@ -4,7 +4,7 @@

 Contains:

-* [AST_Design](./AST_Design.md): Overvie of ASTs
+* [AST_Design](./AST_Design.md): Overview of ASTs
 * [AST Types](./AST_Types.md): Listing of all AST types along with their Code type interface.
 * [Parsing](./Parsing.md): Overview of the parsing interface.
 * [Parser Algo](./Parser_Algo.md): In-depth breakdown of the parser's implementation.
@@ -49,9 +49,9 @@ However, the user may specifiy memory configuration.

https://github.com/Ed94/gencpp/blob/eea4ebf5c40d5d87baa465abfb1be30845b2377e/base/components/ast.hpp#L396-L461

*`StringCahced` is a typedef for `StrC` (a string slice), to denote it is an interned string*
*`CodeType` is enum taggin the type of code. Has an underlying type of `u32`*
*`OperatorT` is a typedef for `EOperator::Type` which has an underlying type of `u32`*
*`StringCahced` is a typedef for `String const`, to denote it is an interned string*
*`String` is the dynamically allocated string type for the library*

AST widths are setup to be AST_POD_Size.
@@ -197,11 +197,11 @@ if ( $vendor -match "clang" )
 $compiler_args += $flag_no_optimization
 }
 if ( $debug ) {
-$compiler_args += ( $flag_define + 'Build_Debug=1' )
+$compiler_args += ( $flag_define + 'GEN_BUILD_DEBUG=1' )
 $compiler_args += $flag_debug, $flag_debug_codeview, $flag_profiling_debug
 }
 else {
-$compiler_args += ( $flag_define + 'Build_Debug=0' )
+$compiler_args += ( $flag_define + 'GEN_BUILD_DEBUG=0' )
 }

 $warning_ignores | ForEach-Object {
@@ -277,11 +277,11 @@ if ( $vendor -match "clang" )
 $compiler_args += $flag_no_optimization
 }
 if ( $debug ) {
-$compiler_args += ( $flag_define + 'Build_Debug=1' )
+$compiler_args += ( $flag_define + 'GEN_BUILD_DEBUG=1' )
 $compiler_args += $flag_debug, $flag_debug_codeview, $flag_profiling_debug
 }
 else {
-$compiler_args += ( $flag_define + 'Build_Debug=0' )
+$compiler_args += ( $flag_define + 'GEN_BUILD_DEBUG=0' )
 }

 $warning_ignores | ForEach-Object {
@@ -402,7 +402,7 @@ if ( $vendor -match "msvc" )
 if ( $debug )
 {
 $compiler_args += $flag_debug
-$compiler_args += ( $flag_define + 'Build_Debug=1' )
+$compiler_args += ( $flag_define + 'GEN_BUILD_DEBUG=1' )
 $compiler_args += ( $flag_path_debug + $path_output + '\' )
 $compiler_args += $flag_link_win_rt_static_debug

@@ -412,7 +412,7 @@ if ( $vendor -match "msvc" )
 }
 }
 else {
-$compiler_args += ( $flag_define + 'Build_Debug=0' )
+$compiler_args += ( $flag_define + 'GEN_BUILD_DEBUG=0' )
 $compiler_args += $flag_link_win_rt_static
 }
 $compiler_args += $includes | ForEach-Object { $flag_include + $_ }
@@ -489,7 +489,7 @@ if ( $vendor -match "msvc" )
 if ( $debug )
 {
 $compiler_args += $flag_debug
-$compiler_args += ( $flag_define + 'Build_Debug=1' )
+$compiler_args += ( $flag_define + 'GEN_BUILD_DEBUG=1' )
 $compiler_args += ( $flag_path_debug + $path_output + '\' )
 $compiler_args += $flag_link_win_rt_static_debug

@@ -498,7 +498,7 @@ if ( $vendor -match "msvc" )
 }
 }
 else {
-$compiler_args += ( $flag_define + 'Build_Debug=0' )
+$compiler_args += ( $flag_define + 'GEN_BUILD_DEBUG=0' )
 $compiler_args += $flag_link_win_rt_static
 }
 $compiler_args += $includes | ForEach-Object { $flag_include + $_ }