* pad out (non-packed) struct fields when lowering to bytes to be
saved in the binary - prior to this change, fields would be
saved at unaligned addresses, leading to incorrect accesses
* add a matching test case to `behavior/struct.zig` tests
* fix offset to field calculation in `struct_field_ptr` on `x86_64`
AstGen:
* rename the known_has_bits flag to known_non_opv so that it better
reflects what it actually means.
* add a known_comptime_only flag.
* make the flags take advantage of the identifiers of primitives and
the fact that Zig has no shadowing.
* correct the known_non_opv flag for function bodies.
Sema:
* Rename `hasCodeGenBits` to `hasRuntimeBits` to better reflect what it
does.
- This function got a bit more complicated in this commit because of
the duality of function bodies: on one hand they have runtime bits,
but on the other hand they are required to be comptime-known.
* WipAnonDecl now takes a LazySrcLoc parameter and performs the type
resolutions that it needs during finish().
* Implement comptime `@ptrToInt`.
Codegen:
* Improved handling of lowering decl_ref; make it work for
comptime-known ptr-to-int values.
- This same change had to be made many different times; perhaps we
should look into merging the implementations of `genTypedValue`
across x86, arm, aarch64, and riscv.
It is the job of codegen backends to mark Decls that are referenced as
alive so that the frontend does not sweep them with the garbage. This
commit unifies the code between the backends with an added method on
Decl.
The implementation is more complete than before, switching on the Decl
val tag and recursing into sub-values.
As a result, two more array tests are passing.
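A toy model of that unified pass (the types and names here are
illustrative only, not the actual compiler API):

    const Decl = struct { alive: bool = false };

    const Value = union(enum) {
        int: i64,
        decl_ref: *Decl,
        aggregate: []const Value,
    };

    // Switch on the value tag and recurse into sub-values so that every
    // Decl reachable from a referenced value survives garbage collection.
    fn markReferencedDeclsAlive(val: Value) void {
        switch (val) {
            .int => {},
            .decl_ref => |decl| decl.alive = true,
            .aggregate => |elems| for (elems) |elem| markReferencedDeclsAlive(elem),
        }
    }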
* stage2: put decls in different MachO sections
Use `getDeclVAddrWithReloc` when targeting MachO backend rather than
`getDeclVAddr` - this fn returns a zero vaddr and instead creates a
relocation on the linker side which will get automatically updated
whenever the target decl is moved in memory. This fn also records
a rebase of the target pointer so that its value is correctly slid
in the presence of ASLR.
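In sketch form (the `Reloc` struct and function shape below are invented
for illustration; the real linker types differ):

    const std = @import("std");

    const Reloc = struct { atom: u32, target: u32, offset: u32 };

    // Instead of handing out a concrete virtual address, record a
    // relocation (plus a rebase so the pointer is slid under ASLR) and
    // return 0; the linker patches in the real address whenever the
    // target atom moves.
    fn getDeclVAddrWithReloc(
        relocs: *std.ArrayList(Reloc),
        rebases: *std.ArrayList(u32),
        atom: u32,
        target: u32,
        offset: u32,
    ) !u64 {
        try relocs.append(.{ .atom = atom, .target = target, .offset = offset });
        try rebases.append(offset);
        return 0;
    }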
This commit enables `zig test` on x86_64-macos.
* stage2: fix output section selection for type,val pairs
const locals now detect whether the value ends up being comptime-known.
In that case, the runtime AIR instructions are replaced with a decl_ref
constant.
In the backends, some more sophisticated logic for marking decls as
alive was needed to prevent Decls incorrectly being garbage collected
that were indirectly referenced in such manner.
* Introduce a mechanism into Sema for emitting a compile error when an
integer is too big and we need it to fit into a usize.
* Add `@intCast` where necessary
* link/MachO: fix an unnecessary allocation where all that was needed
was appending zeroes to an ArrayList.
* Add `error.Overflow` as a possible error to some codepaths, allowing
usage of `math.intCast`.
closes #9710
* incorporate Andrew's MIR draft as Mir.zig
* add skeleton for Emit.zig module - Emit will lower MIR into
machine code or textual ASM.
* implement push
* implement ret
* implement mov r/m, r
* implement sub r/m imm and sub r/m, r
* group ops with common encodings together - some ops, such as MOV and
CMP, share an implementation, so put them together and vary only the
actual opcode and the ModRM extension.
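For reference, this is the pattern being exploited: the x86 "group 1"
ALU ops all share opcode 0x81 for the r/m32, imm32 form and differ only
in the 3-bit extension carried in the ModRM reg field. The enum and
helper below are just an illustration:

    const Group1Op = enum { add, @"or", adc, sbb, @"and", sub, xor, cmp };

    // All of these encode as 0x81 /ext for r/m32, imm32; only the ModRM
    // extension differs.
    fn group1Ext(op: Group1Op) u3 {
        return switch (op) {
            .add => 0,
            .@"or" => 1,
            .adc => 2,
            .sbb => 3,
            .@"and" => 4,
            .sub => 5,
            .xor => 6,
            .cmp => 7,
        };
    }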
* implement pop
* implement movabs - movabs is a special case of mov that is not
handled by the general mov MIR instruction because it requires
handling 64-bit immediates.
* store imm64 as a struct `Imm64{ msb: u32, lsb: u32 }` in extra data,
for use with, for instance, the movabs instruction (sketched below)
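A sketch of the split and reassembly (written with the two-argument
builtin signatures in use at the time):

    const Imm64 = struct {
        msb: u32,
        lsb: u32,

        // Split a 64-bit immediate into two u32 halves for MIR extra data.
        fn encode(v: u64) Imm64 {
            return .{
                .msb = @truncate(u32, v >> 32),
                .lsb = @truncate(u32, v),
            };
        }

        // Reassemble the original immediate, e.g. when emitting movabs.
        fn decode(imm: Imm64) u64 {
            return @as(u64, imm.msb) << 32 | imm.lsb;
        }
    };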
* implement more mov variations
* implement adc
* implement add
* implement sub
* implement xor
* implement and
* implement or
* implement sbb
* implement cmp
* implement lea - lea doesn't follow the same scheme as the other
instructions above. Similarly, I think bit shifts and rotates should
be put in a separate basket too.
* implement adc_scale_src
* implement add_scale_src
* implement sub_scale_src
* implement xor_scale_src
* implement and_scale_src
* implement or_scale_src
* implement sbb_scale_src
* implement cmp_scale_src
* implement adc_scale_dst
* implement add_scale_dst
* implement sub_scale_dst
* implement xor_scale_dst
* implement and_scale_dst
* implement or_scale_dst
* implement sbb_scale_dst
* implement cmp_scale_dst
* implement mov_scale_src
* implement mov_scale_dst
* implement adc_scale_imm
* implement add_scale_imm
* implement sub_scale_imm
* implement xor_scale_imm
* implement and_scale_imm
* implement or_scale_imm
* implement sbb_scale_imm
* implement cmp_scale_imm
* port bin math to MIR
* backpatch stack size into prev MIR inst
* implement Function.gen() (minus dbg info)
* implement jmp/call [imm] - we can now call functions using indirect absolute
addressing, or via registers.
* port airRet to use MIR
* port airLoop to use MIR
* patch up performReloc to use inst indices
* implement conditional jumps (without relocs)
* implement set byte on condition
* implement basic lea r64, [rip + imm]
* implement calling externs
* implement callq in PIE
* implement lea RIP in PIE context
* remove all refs to Encoder from CodeGen
* implement basic imul ops
* pass all Linux tests!
* enable most of dbg info gen
* generate arg dbg info in Emit
The main problem that motivated these changes is that global constants
which are referenced by pointer would not be emitted into the binary.
This happened because `semaDecl` did not add `codegen_decl` tasks for
global constants, instead relying on the constant values being copied as
necessary. However when the global constants are referenced by pointer,
they need to be sent to the linker to be emitted.
Making global const arrays, structs, and unions get emitted uncovered
a latent issue: the anonymous decls that they referenced would get
garbage collected (via `deleteUnusedDecl`) even though they would
later be referenced by the global const.
In order to solve this problem, I introduced `anon_work_queue` which is
the same as `work_queue` except a lower priority. The `codegen_decl`
task for anon decls goes into the `anon_work_queue` ensuring that the
owner decl gets a chance to mark its anon decls as alive before they are
possibly deleted.
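In sketch form (toy task and queue types; the real compiler's queues
and task sets differ):

    const std = @import("std");
    const Queue = std.fifo.LinearFifo(Task, .Dynamic);

    const Task = struct {
        name: []const u8,

        fn run(task: Task) void {
            std.debug.print("running {s}\n", .{task.name});
        }
    };

    // Drain the regular queue before the anon queue, so an owner decl's
    // codegen gets a chance to mark its anon decls alive before any
    // lower-priority anon task could observe them as unused.
    fn drainQueues(work_queue: *Queue, anon_work_queue: *Queue) void {
        while (work_queue.readItem()) |task| task.run();
        while (anon_work_queue.readItem()) |task| task.run();
    }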
This caused a few regressions, which I made the judgement call to add
workarounds for. Two steps forward, one step back, is still progress.
The regressions were:
* Two behavior tests having to do with unions. These tests were
intentionally exercising the LLVM constant value lowering; however,
due to the garbage collection bug that was fixed in this commit, the
LLVM code was not getting exercised, and union types/values were not
implemented correctly because I had forgotten that LLVM does not
allow bitcasting aggregate values.
- This is worked around by allowing those 2 test cases to regress,
moving them to the "passing for stage1 only" section.
* The test-stage2 test cases (in test/cases/*) for non-LLVM backends
previously did not have any calls to lower struct values, but now
they do. The code that was there was just `@panic("TODO")`. I
replaced that code with a stub that generates the wrong value. This
is an intentional miscompilation that will obviously need to get
fixed before any struct behavior tests pass. None of the current
tests we have exercise loading any values from these global const
structs, so there is not a problem until we try to improve these
backends.
* Fix backend using wrong union field of the slice instruction.
* LLVM backend properly sets alignment on global variables.
* Sema: add coercion for *T to *[1]T (see the example after this list)
* Sema: pointers to Decls with explicit alignment now have alignment
metadata in them.
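A behavior-test style example of the new coercion:

    const std = @import("std");

    test "coerce *T to *[1]T" {
        var x: u8 = 123;
        const p: *[1]u8 = &x; // single-item pointer coerces to a pointer to a length-1 array
        try std.testing.expect(p[0] == 123);
    }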
AIR:
* div is renamed to div_trunc.
* Add div_float, div_floor, div_exact.
- Implemented in Sema and LLVM codegen. C backend has a stub.
Improvements to std.math.big.Int:
* Add `eqZero` function to `Mutable`.
* Fix incorrect results for `divFloor`.
Compiler-rt:
* Add muloti4 to the stage2 section.
* Restructure elemPtr a bit
* New AIR instruction: slice_elem_ptr, which returns a pointer to an element of a slice
* Value: adapt elemPtr to work on slices
* New AIR instruction: slice, which constructs a slice out of a pointer
and a length.
* AstGen: use `coerced_ty` for start and end expressions, use `none`
for the sentinel, and don't try to load the result of the slice
operation because it returns a by-value result.
* Sema: pointer arithmetic is extracted into analyzePointerArithmetic
and it is used by the implementation of slice.
- Also I implemented comptime pointer addition.
* Sema: extract logic into analyzeSlicePtr, analyzeSliceLen and use them
inside the slice semantic analysis.
- The approach in stage2 is much cleaner than stage1 because it uses
more granular analysis calls for obtaining the slice pointer, doing
arithmetic on it, and checking if the length is comptime-known.
* Sema: use the slice Value Tag for slices when doing coercion from
pointer-to-array (see the example after this list).
* LLVM backend: detect when emitting a GEP instruction into a
pointer-to-array and add the extra index that is required.
* Type: ptrAlignment for c_void returns 0.
* Implement Value.hash and Value.eql for slices.
* Remove accidentally duplicated behavior test.
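A small example of what this enables at comptime (slicing a
pointer-to-array with comptime-known bounds):

    const assert = @import("std").debug.assert;

    test "comptime slice of a pointer-to-array" {
        comptime {
            const arr = [_]u8{ 1, 2, 3, 4 };
            const s: []const u8 = arr[1..3]; // pointer-to-array coerced to a slice
            assert(s.len == 2);
            assert(s[0] == 2 and s[1] == 3);
        }
    }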
* std.os: take advantage of `@minimum`. It's probably time to
deprecate `std.min` and `std.max`.
* New AIR instructions: min and max
* Introduce SIMD vector support to stage2
* Add `@Type` support for vectors
* Sema: add `checkSimdBinOp` which can be re-used for other arithmetic
operators that want to support vectors.
* Implement coercion from vectors to arrays (see the example after
this list).
- In backends this is handled with a bitcast from vector to array;
however, maybe we want to reduce the amount of branching by
introducing an explicit AIR instruction for it in the future.
* LLVM backend: implement lowering vector types
* Sema: Implement `slice.ptr` at comptime
* Value: improve `numberMin` and `numberMax` to support floats in
addition to integers, and make them behave properly in the presence
of NaN.
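An example exercising both the builtin (under its name at the time; it
was later renamed) and the new vector-to-array coercion:

    const std = @import("std");

    test "@minimum on vectors" {
        const a: @Vector(4, i32) = [_]i32{ 1, 5, 3, 7 };
        const b: @Vector(4, i32) = [_]i32{ 2, 4, 6, 0 };
        const m = @minimum(a, b); // elementwise: { 1, 4, 3, 0 }
        const arr: [4]i32 = m; // vector-to-array coercion
        try std.testing.expect(arr[1] == 4 and arr[3] == 0);
    }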
* apply late symbol resolution for globals - instead of resolving
the exact location of a symbol in locals, globals or undefs,
we postpone the exact resolution until we have a full picture
for relocation resolution.
* fixup stubs to defined symbols - this is currently a hack rather
than a final solution. I'll need to work out the details to make
it more approachable. Currently, we preemptively create a stub
for a lazy bound global and fix up stub offsets in stub helper
routine if the global turns out to be undefined only. This is quite
wasteful in terms of space as we create stub, stub helper and lazy ptr
atoms but don't use them for defined globals.
* change log scope to .link for macho.
* remove redundant code paths from Object and Atom.
* drastically simplify the contents of Relocation struct (i.e., it is
now a simple superset of macho.relocation_info), clean up relocation
parsing and resolution logic.
* Add AIR instructions: ret_ptr, ret_load
- This allows Sema to be blissfully unaware of the backend's decision
to implement by-val/by-ref semantics for struct/union/array types.
Backends can lower these simply as alloc, load, ret instructions,
or they can take advantage of them to use a result pointer (see the
sketch after this list).
* Add AIR instruction: array_elem_val
- Allows for better codegen for `Sema.elemVal`.
* Implement calculation of ABI alignment and ABI size for unions.
* Before appending the following AIR instructions to a block,
resolveTypeLayout is called on the type:
- call - return type
- ret - return type
- store_ptr - elem type
* Sema: fix memory leak in `zirArrayInit` and other cleanups to this
function.
* x86_64: implement the full x86_64 C ABI according to the spec
* Type: implement `intInfo` for error sets.
* Type: implement `intTagType` for tagged unions.
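To illustrate the ret_ptr/ret_load item above: a function such as the
following returns an aggregate, Sema emits the same AIR either way, and
the choice of by-val vs. result-pointer lowering is left entirely to
the backend.

    const S = struct { a: u64, b: u64, c: u64, d: u64 };

    // A backend that treats S as by-ref can lower this with ret_ptr
    // (obtain the result pointer), stores into it, and ret_load; another
    // backend can lower the same AIR as plain alloc, load, ret.
    fn make(x: u64) S {
        return .{ .a = x, .b = x + 1, .c = x + 2, .d = x + 3 };
    }

    test "by-ref vs by-val return" {
        try @import("std").testing.expect(make(10).d == 13);
    }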
The Zig type tag `Fn` is now used exclusively for function bodies.
Function pointers are modeled as `*const T` where `T` is a `Fn` type.
* The `call` AIR instruction now allows a function pointer operand as
well as a function operand.
* Sema now has a coercion from function body to function pointer.
* Function type syntax, e.g. `fn()void`, now returns a Zig type tag of
Pointer with child Fn, rather than Fn directly.
- I think this should probably be reverted. I will discuss the lang
specs before doing this. The idea is that function pointers would
need to be specified as `*const fn()void` rather than `fn() void`.
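Under that model, code would look like:

    fn hello() void {}

    comptime {
        // `hello` itself has type `fn () void` (a function body); a
        // pointer to it is spelled `*const fn () void`.
        const ptr: *const fn () void = &hello;
        _ = ptr;
    }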
LLVM backend:
* Enable calling the panic handler (previously this just
emitted `@breakpoint()` since the backend could not handle the panic
function).
* Implement sret
* Introduce `isByRef` and implement it for structs and arrays. Types
that are `isByRef` are now passed as pointers to functions, and e.g.
`elem_val` will return a pointer instead of doing a load.
* Move the function type creating code from `resolveLlvmFunction` to
`llvmType` where it belongs; now there is only one instance of this
logic instead of two.
* Add the `nonnull` attribute to non-optional pointer parameters.
* Fix `resolveGlobalDecl` not using fully-qualified names and not using
the `decl_map`.
* Implement `genTypedValue` for pointer-like optionals.
* Fix memory leak when lowering `block` instruction and OOM occurs.
* Implement volatile checks where relevant.
Also improve the LLVM backend to support lowering bigints to LLVM
values.
Moves over a bunch of math.zig test cases to the "passing for stage2"
section.
* Remove the builtins `@addWithSaturation`, `@subWithSaturation`,
`@mulWithSaturation`, and `@shlWithSaturation` now that we have
first-class syntax for saturating arithmetic.
* langref: Clarify the behavior of `@shlExact`.
* Ast: rename `bit_shift_left` to `shl` and `bit_shift_right` to `shr`
for consistency.
* Air: rename ops to include an underscore separator, for consistency
with the rest of the ops.
* Air: add shl_exact instruction
* Use non-extended tags for saturating arithmetic, to keep it
simple so that all the arithmetic operations can be done the same
way.
- Sema: unify analyzeArithmetic with analyzeSatArithmetic
- implement comptime `+|`, `-|`, and `*|`
- allow float operands to saturating arithmetic
* `<<|` allows any integer type for the RHS.
* C backend: fix rebase conflicts
* LLVM backend: reduce the amount of branching for arithmetic ops
* zig.h: fix magic number not matching actual size of C integer types
- adds initial support for the operators +|, -|, *|, <<|, +|=, -|=, *|=, <<|=
- uses the operators in addition to the builtins in the behavior test
- adds binOpExt() and assignBinOpExt() to AstGen.zig. These need to be audited
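The new operators in action, behavior-test style:

    const std = @import("std");

    test "saturating arithmetic operators" {
        var x: u8 = 200;
        x +|= 100; // clamps at the maximum value of u8
        try std.testing.expect(x == 255);
        try std.testing.expect(@as(i8, -128) -| 1 == -128);
        try std.testing.expect(@as(u8, 2) *| 200 == 255);
        try std.testing.expect(@as(u8, 1) <<| 9 == 255); // RHS may be any integer type
    }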
* AIR: add `mod` instruction for modulus division
- Implement for LLVM backend
* Sema: implement `@mod`, `@rem`, and `%` (see the comptime example
below).
* Sema: fix comptime switch evaluation
* Sema: implement comptime shift left
* Sema: fix the logic inside analyzeArithmetic to handle all the
nuances between the different mathematical operations.
- Implement comptime wrapping operations
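The difference between the two, now checkable at comptime:

    const assert = @import("std").debug.assert;

    comptime {
        // @mod takes the sign of the divisor; @rem takes the sign of the
        // dividend.
        assert(@mod(-7, 3) == 2);
        assert(@rem(-7, 3) == -1);
        assert(@mod(7, 3) == 1 and @rem(7, 3) == 1);
    }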
* AIR: add `get_union_tag` instruction
- implement in LLVM backend
* Sema: implement == and != for union and enum literal
- Also implement coercion from union to its own tag type
* Value: implement hashing for union values
The motivating example is this snippet:

    comptime assert(@typeInfo(T) == .Float);

This was the next blocker for stage2 building compiler-rt.
Now it is a compile-time switch on an integer.
* AIR instructions struct_field_ptr and related functions now are also
emitted by the frontend for unions. Backends must inspect the type
of the pointer operand to lower the instructions correctly.
- These will be renamed to `agg_field_ptr` (short for "aggregate") in
the future.
* Introduce the new `set_union_tag` AIR instruction.
* Introduce `Module.EnumNumbered` and associated `Type` methods. This
is for enums which have no decls, but do have the possibility of
overriding the integer tag type and tag values.
* Sema: Implement support for union tag types in both the
auto-generated and explicitly-provided cases, as well as explicitly
provided enum tag values in union declarations.
* LLVM backend: implement lowering union types, union field pointer
instructions, and the new `set_union_tag` instruction.
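For example, a union with an explicitly provided tag type and tag
values, exercising both the tag comparison and `set_union_tag`:

    const std = @import("std");

    const Tag = enum(u8) { int = 1, boolean = 2 };
    const U = union(Tag) { int: i32, boolean: bool };

    test "union tags" {
        var u: U = .{ .int = 42 };
        u = .{ .boolean = true }; // changing the active field updates the tag (set_union_tag)
        try std.testing.expect(u == .boolean); // == with an enum literal compares the tags
    }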
* prepare compiler-rt to support being compiled by stage2
- put in a few minor workarounds that will be removed later, such as
using `builtin.stage2_arch` rather than `builtin.cpu.arch`.
- only try to export a few symbols for now - we'll move more symbols
over to the "working in stage2" section as they become functional
and gain test coverage.
- use `inline fn` at function declarations rather than `@call` with an
always_inline modifier at the callsites, to avoid depending on the
anonymous array literal syntax language feature (for now).
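The workaround looks like this (hypothetical function, for
illustration):

    // Declaring the function `inline` avoids writing, at every call
    // site, `@call(.{ .modifier = .always_inline }, addWrap, .{ a, b })`,
    // which would depend on anonymous tuple literal syntax for the
    // argument list.
    inline fn addWrap(a: u32, b: u32) u32 {
        return a +% b;
    }

    test "inline fn workaround" {
        try @import("std").testing.expect(addWrap(0xffff_ffff, 1) == 0);
    }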
* AIR: replace floatcast instruction with fptrunc and fpext for
shortening and widening floating point values, respectively.
* Introduce a new ZIR instruction, `export_value`, which implements
`@export` for the case when the thing to be exported is a local
comptime value that points to a function.
- AstGen: fix `@export` not properly reporting ambiguous decl
references.
* Sema: handle ExportOptions linkage. The value is now available to all
backends.
- Implement setting global linkage as appropriate in the LLVM
backend. I did not yet inspect the LLVM IR, so this still needs to
be audited. There is already a pending task to make sure the alias
stuff is working as intended, and this is related.
- Sema almost handles section, just a tiny bit more code is needed in
`resolveExportOptions`.
* Sema: implement float widening and shortening for both `@floatCast`
and float coercion.
- Implement the LLVM backend code for this as well.
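Both directions in one example (using the two-argument `@floatCast`
signature of the time):

    const std = @import("std");

    test "float widening and shortening" {
        var small: f16 = 0.25;
        const wide: f64 = small; // coercion widens: lowered as fpext
        const narrow = @floatCast(f16, wide); // explicit shortening: lowered as fptrunc
        try std.testing.expect(narrow == 0.25);
    }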
There were two things to resolve here:
* Snektron's branch edited Zir printing, but in master branch
I moved the printing code from Zir.zig to print_zir.zig. So that
just had to be moved over.
* In master branch I fleshed out coerceInMemory a bit more, which
caused one of Snektron's test cases to fail, so I had to add
addrspace awareness to that. Once I did that the tests passed again.
* introduce float_to_int and int_to_float AIR instructions and
implement them for the LLVM backend and C backend.
* Sema: implement `zirIntToFloat`.
* Sema: implement `@atomicRmw` comptime evaluation (see the example
after this list)
- introduce `storePtrVal` for when one needs to store a Value to a
pointer which is a Value, and assert it happens at comptime.
* Value: introduce new functionality:
- intToFloat
- numberAddWrap
- numberSubWrap
- numberMax
- numberMin
- bitwiseAnd
- bitwiseNand (not implemented yet)
- bitwiseOr
- bitwiseXor
* Sema: hook up `zirBitwise` to the new Value bitwise implementations
* Type: rename `isFloat` to `isRuntimeFloat` because it returns `false`
for `comptime_float`.
* langref: add some more "see also" links for atomics
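The comptime evaluation in action (using the builtin signature and
ordering spellings of the time):

    const assert = @import("std").debug.assert;

    test "comptime @atomicRmw" {
        comptime {
            var x: u32 = 5;
            // At comptime, the store through the pointer Value goes
            // through storePtrVal.
            const prev = @atomicRmw(u32, &x, .Add, 3, .SeqCst);
            assert(prev == 5);
            assert(x == 8);
        }
    }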
* Add the following AIR instructions
- atomic_load
- atomic_store_unordered
- atomic_store_monotonic
- atomic_store_release
- atomic_store_seq_cst
- atomic_rmw
* Implement those AIR instructions in LLVM and C backends (see the
example at the end of this list).
* AstGen: make the `ty` result locations for `@atomicRmw`, `@atomicLoad`,
and `@atomicStore` be `coerced_ty` to avoid unnecessary ZIR
instructions when Sema will be doing the coercions redundantly.
* Sema for `@atomicLoad` and `@atomicRmw` is done, however Sema for
`@atomicStore` is not yet implemented.
- comptime eval for `@atomicRmw` is not yet implemented.
* Sema: flesh out `coerceInMemoryAllowed` a little bit more. It can now
handle pointers.
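An end-to-end example of the builtins these instructions back (era
ordering spellings; per the note above, `@atomicStore` Sema was still
pending at this point):

    const std = @import("std");

    test "atomic load and rmw" {
        var x: u32 = 42;
        try std.testing.expect(@atomicLoad(u32, &x, .SeqCst) == 42); // atomic_load
        _ = @atomicRmw(u32, &x, .Add, 1, .Monotonic); // atomic_rmw
        try std.testing.expect(@atomicLoad(u32, &x, .SeqCst) == 43);
    }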