AstGen:
* rename the known_has_bits flag to known_non_opv so that it better
reflects what it actually means.
* add a known_comptime_only flag.
* make the flags take advantage of primitive identifiers and the fact
that Zig has no shadowing.
* correct the known_non_opv flag for function bodies.
Sema:
* Rename `hasCodeGenBits` to `hasRuntimeBits` to better reflect what it
does.
- This function got a bit more complicated in this commit because of
the duality of function bodies: on one hand they have runtime bits,
but on the other hand they must be comptime-known.
* `WipAnonDecl` now takes a `LazySrcLoc` parameter and performs the type
resolutions that it needs during `finish()`.
* Implement comptime `@ptrToInt` (see the sketch below).
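A minimal sketch of what now works at comptime (the address and names
here are illustrative only):

```zig
const mmio_reg = @intToPtr(*volatile u32, 0x1000);

comptime {
    // With a comptime-known pointer, `@ptrToInt` can be evaluated at comptime.
    if (@ptrToInt(mmio_reg) != 0x1000) @compileError("unexpected address");
}
```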
Codegen:
* Improve handling of lowering decl_ref; make it work for
comptime-known ptr-to-int values.
- This same change had to be made many different times; perhaps we
should look into merging the implementations of `genTypedValue`
across x86, arm, aarch64, and riscv.
This commit updates stage2 to enforce the property that the syntax
`fn()void` is a function *body*, not a *pointer*. To get a pointer, the
syntax `*const fn()void` is required.
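A minimal sketch of the distinction (declarations are illustrative):

```zig
fn hello() void {}

// `fn () void` is the function *body* type, which is comptime-only.
const BodyType = fn () void;

// A function *pointer* is spelled out explicitly and may be runtime-known.
var fn_ptr: *const fn () void = &hello;
```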
ZIR puts function alignment into the func instruction rather than the
decl so that the alignment makes it into function types. The LLVM
backend respects function alignments.
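Illustrative example of a function alignment that now flows into the
function type and is honored by the LLVM backend:

```zig
// The alignment is carried in the func ZIR instruction, not on the decl.
fn interruptHandler() align(16) void {}
```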
Struct and Union now have `fieldSrcLoc` methods to help look up the
source locations of their fields. These trigger full loading,
tokenization, and parsing of source files, so they should only be called
once it is confirmed that an error message needs to be printed.
There are some nice new error hints for explaining why a type is
required to be comptime, particularly for structs that contain function
body types.
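Illustrative example of a type that triggers these notes:

```zig
// The function body field makes the whole struct type comptime-only; the
// new error notes point at this field to explain why.
const Callbacks = struct {
    on_event: fn () void,
};
```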
`Type.requiresComptime` is now moved into Sema because it can fail and
might need to trigger field type resolution. Comptime pointer loading
takes into account types that do not have a well-defined memory layout
and does not try to compute a byte offset for them.
`fn()void` syntax no longer secretly makes a pointer. You get a function
body type, which requires comptime. However, a pointer to a function
body can be runtime-known (obviously).
Compile errors that report "expected pointer, found ..." are factored
out into the convenience functions `checkPtrOperand` and `checkPtrType`
and now carry a note about function pointers.
Implemented `Value.hash` for functions, enum literals, and undefined values.
stage1 is not updated to this (yet?), so some workarounds and disabled
tests are needed to keep everything working. Should we update stage1 to
these new type semantics? Probably yes, because I don't want to add too
much conditional compilation logic in the std lib for the different
backends.
There are some differences vs. the union encoding in the LLVM backend:
- Tagged unions with a 0-bit payload do not become their tag type.
Instead, they are a struct with an empty `union` as their payload field
(see the example below).
- We do not order the `payload`/`tag` storage based on their alignment.
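Illustrative example of the first point:

```zig
// Every payload here is zero-bit, yet the lowering keeps a struct wrapping
// an empty payload union plus the tag, instead of collapsing the union to
// just its tag type.
const Event = union(enum) {
    started: void,
    stopped: void,
};
```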
An attempt to normalize some of the function names in build.zig:
add*Dir becomes add*Path, and "Library" is used instead of the "Lib"
abbreviation. The PR does not remove the old names; it only adds the
new normalized ones to facilitate a transition period.
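A hypothetical before/after; the exact function names below are
assumptions based on the add*Dir -> add*Path and "Lib" -> "Library"
normalization:

```zig
const std = @import("std");

pub fn build(b: *std.build.Builder) void {
    const exe = b.addExecutable("app", "src/main.zig");
    exe.addIncludeDir("vendor/include");  // old spelling
    exe.addIncludePath("vendor/include"); // new, normalized name
    exe.addLibPath("vendor/lib");         // old spelling
    exe.addLibraryPath("vendor/lib");     // new: *Path + "Library" spelled out
    exe.install();
}
```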
The size of a GUID is not platform-dependent; it is always a fixed
number of bits. So I've updated the GUID type to use fixed-width integer
types rather than platform-dependent C integer types.
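A sketch of the fixed-width layout (field names follow the conventional
GUID layout; the exact declaration in the tree may differ):

```zig
const Guid = extern struct {
    data1: u32,
    data2: u16,
    data3: u16,
    data4: [8]u8,
};
```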
* AstGen: use Ast.zig helper methods to avoid copy-pasting token-counting logic
- take advantage of the `first_doc_comment` field we already have for
param AST nodes
* Add missing ZIR docs
There are some restrictions here (see the sketch below):
- We either need C11 or a compiler that supports the aligned attribute.
- We cannot request an alignment smaller than the type's natural C
alignment.
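Illustratively, assuming this concerns lowering alignments to C:

```zig
// An over-aligned global can be expressed with C11 `_Alignas` or the
// `aligned` attribute.
var over_aligned: u32 align(64) = 0;

// An alignment below the type's natural C alignment cannot be expressed:
// var under_aligned: u32 align(1) = 0;
```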
Looking at the assembly generated for BufferedWriter, one can see that
it has to do a lot of work just to copy over some bytes and increase an
offset. This is because the LinearFifo is a much more general construct
than what BufferedWriter needs and the optimizer cannot prove that we
don't need to do this extra work.
Replaces the inflate API `inflateStream(reader: anytype, window_slice: []u8)` with
`decompressor(allocator: mem.Allocator, reader: anytype, dictionary: ?[]const u8)` and
`compressor(allocator: mem.Allocator, writer: anytype, options: CompressorOptions)`.
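A hedged usage sketch of the new decompressor shape; the module path and
helper names below are assumptions:

```zig
const std = @import("std");

fn inflateAll(allocator: std.mem.Allocator, compressed: []const u8) ![]u8 {
    var stream = std.io.fixedBufferStream(compressed);
    // Passing null for the dictionary, matching the new signature above.
    var inflate = try std.compress.deflate.decompressor(allocator, stream.reader(), null);
    defer inflate.deinit();
    return inflate.reader().readAllAlloc(allocator, std.math.maxInt(usize));
}
```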
Read bytes to check expected values instead of reading and hashing them.
Hashing is a waste of time when we can just read and compare.
This also removes a dependency on std.crypto.hash.sha2.Sha256 for tests.
If there is a big atom available for re-use in the free list, and
it's the last atom in the section, its ideal capacity might span the
entire section in which case we do not want to calculate the actual
end VM addr of the symbol since it may overflow. Instead, we just take
the max capacity available as end VM addr estimate. In this case,
the max capacity equals `std.math.maxInt(u64)`.
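A hedged sketch of the guard with hypothetical names:

```zig
const std = @import("std");

// Estimate the end VM address of a free-list atom, avoiding overflow when
// it is the last atom in its section.
fn endVmAddrEstimate(start_vm_addr: u64, max_capacity: u64, is_last_in_section: bool) u64 {
    if (is_last_in_section) return std.math.maxInt(u64);
    return start_vm_addr + max_capacity;
}
```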
Instead of using a `push` and `pop` combo, we now re-use our stack
allocation mechanism which means we don't have to worry about
16-byte stack adjustments on macOS as it is handled automatically
for us. Another benefit is that we don't have to backpatch stack
offsets when pulling args from the stack.
The previous commit that implemented doc comment ZIR support for
decls did not properly account for all the possible attribute
keyword combinations (threadlocal, extern, and such).
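Illustrative decl combining a doc comment with attribute keywords:

```zig
/// Number of requests handled on this thread.
pub extern threadlocal var tls_counter: u32;
```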
Previously, optional slices reported the pointer size as their ABI size.
We now account for slices when calculating the correct size, which is
the ABI alignment plus the slice ABI size.
To generate better code for tuples, we detect a tuple operand in
storePtr, and analyze field loads and stores directly. This avoids
an extra allocation + memcpy which would occur if we used `coerce`.
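Illustrative user code that benefits (`Pair` is just an example tuple
type):

```zig
const std = @import("std");

const Pair = std.meta.Tuple(&.{ u32, bool });

// The tuple literal is now stored field-by-field through the pointer
// instead of going through `coerce` plus an extra alloc + memcpy.
fn storePair(ptr: *Pair) void {
    ptr.* = .{ 123, true };
}
```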
When asking a struct or union whether the type requires comptime, it may
need to ask itself recursively, for example because of a field which is
a pointer to itself. This commit adds a field to each to track when the
"requires comptime" computation is in progress, and returns `false` if
the check is already ongoing.
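Illustrative example of the recursive case:

```zig
// Deciding whether `Node` requires comptime recurses back into `Node`
// through the pointer field; the in-progress flag breaks the cycle by
// answering `false` while the check is already ongoing.
const Node = struct {
    value: u32,
    next: ?*const Node,
};
```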