Previously, the MetalFX scaling modes were only displayed in the
`macos` and `ios` feature tag overrides if the editor had Metal support
enabled. However, Metal support is only available in the macOS editor, which
caused two issues:
- You couldn't set the 3D scaling mode to MetalFX for `macos` or `ios`
if you were using the editor on another platform.
- If you opened a project that was last edited on macOS with MetalFX scaling
modes set for these overrides, it would show an unknown value or revert
to the default when saving the project (as the enum value no longer
existed on your end).
To make `CommandQueueMT` more maintainable, this PR replaces the
previous macro hell with variadic templates. This makes the
class far more explicit and will allow us to change how the class works
more easily in the future.
Furthermore, this refactoring has allowed for some optimizations. In
particular, by using `std::forward` to delay the decay of argument types
as late as possible, we are able to move the data from the
call site into our command buffer and later move it into the actual call.
In practice, this means that instead of copying values 3 times as the
old version did, we can now get away with 1 copy and 1 move
for lvalues, and just 2 moves for rvalues. This saves quite a few
operations in a hot code path.
We also now test that the number of copies and moves is
what we expect, so performance regressions in this code can be spotted
easily.
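To illustrate the idea, here is a minimal standalone sketch of perfect forwarding into a buffered command. The `Payload`, `Command`, and `push` names are made up for the example; this is not the actual `CommandQueueMT` code:

```cpp
#include <cstdio>
#include <type_traits>
#include <utility>

// A payload that counts copies and moves, similar in spirit to the new tests.
struct Payload {
	static inline int copies = 0;
	static inline int moves = 0;
	Payload() = default;
	Payload(const Payload &) { copies++; }
	Payload(Payload &&) { moves++; }
};

template <typename T>
struct Command {
	T data;
	// Forwarding reference: lvalues are copied once, rvalues are moved.
	template <typename U>
	explicit Command(U &&p_data) :
			data(std::forward<U>(p_data)) {}
};

template <typename U>
void push(U &&p_data) {
	// Decay to the stored type as late as possible...
	Command<std::decay_t<U>> cmd(std::forward<U>(p_data));
	// ...then move out of the buffered command into the actual call.
	Payload call_arg(std::move(cmd.data));
	(void)call_arg;
}

int main() {
	Payload lvalue;
	push(lvalue); // 1 copy + 1 move
	push(Payload()); // 2 moves
	printf("copies=%d moves=%d\n", Payload::copies, Payload::moves); // copies=1 moves=3
}
```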
Somewhat unscientifically, running the TPS demo by pressing Enter and not
touching the controls, average mspf (repeatable across many runs):
before: 6.467
after:  6.202
This PR updates the `DisplayServer::virtual_keyboard_get_height()` method to return 0 with a warning instead of throwing an error when the virtual keyboard is not supported.
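A sketch of the new fallback behavior (simplified; the exact warning text may differ):

```cpp
int DisplayServer::virtual_keyboard_get_height() const {
	// Previously this was a hard error; now unsupported platforms just
	// warn and report a height of 0.
	WARN_PRINT_ONCE("Virtual keyboard not supported by this display server.");
	return 0;
}
```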
Moves the initialization of the geo_normal value
to after the user shader code, so that the normal
supplied by the user is actually used instead of
the interpolated vertex normal.
- Tweak property hint ranges for some networking settings to ensure
the minimum values don't break the debugger entirely.
- Ensure shader time rollover is set to at least 1, as 0 causes a division by
zero to occur.
All relevant project settings are now covered by a range hint.
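For reference, this is the pattern used to attach range hints to project settings. The setting name and range values below are illustrative, not the exact ones changed in this PR:

```cpp
// Hypothetical example: clamp the minimum to a value that can't break the
// debugger, while still allowing larger values ("or_greater").
GLOBAL_DEF(PropertyInfo(Variant::INT, "network/limits/debugger/max_chars_per_second",
				   PROPERTY_HINT_RANGE, "1024,100000,1,or_greater"),
		32768);
```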
Remove default value from `volume_linear` property documentation
Remove `volume_linear` internal property from `AudioBusLayout`
Update doc/classes/AudioEffectAmplify.xml [no ci]
Co-authored-by: A Thousand Ships <96648715+AThousandShips@users.noreply.github.com>
Make documentation more concise [no ci]
Fixes two errors related to the normal, tangent,
and bitangent vectors, namely normals not always
being inverted on backfaces, and normalization
being reversed from what MikkTSpace expects.
These misaligned accesses show up in all of our CI hooks. It turned
out not to be difficult to fix them.
This will likely improve performance on aarch64.
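The usual fix pattern for such accesses (a generic sketch, not the literal diff) is to replace a cast-and-dereference on a possibly unaligned pointer with `memcpy`, which compilers lower to a single load where the target allows it:

```cpp
#include <cstdint>
#include <cstring>

uint32_t read_u32(const uint8_t *p_bytes) {
	// Misaligned (undefined behavior if p_bytes isn't 4-byte aligned):
	// return *(const uint32_t *)p_bytes;

	// Well-defined everywhere, and just as fast on aarch64:
	uint32_t v;
	memcpy(&v, p_bytes, sizeof(v));
	return v;
}
```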
Fixes #90017
Fixes #90030
Fixes #98044
This PR makes the following changes:
# Force processing of GPU commands for `frame_count` frames
The variable `frames_pending_resources_for_processing` is added to track
this.
Ticket #98044 suggested using `_flush_and_stall_for_all_frames()`
while minimized.
Technically this works and is a viable solution.
However, I noticed that this issue was happening because Logic/Physics
continue to work "business as usual" while minimized (\*). Only Graphics
was being deactivated (which caused commands to accumulate until the
window is restored).
To continue this behavior of "business as usual", I decided that GPU
work should also "continue as usual" by buffering commands in a double
or triple buffer scheme until all commands are done processing (if they
ever stop coming). This is especially important if the app specifically
intends to keep processing while minimized.
Calling `_flush_and_stall_for_all_frames()` would fix the leak, but it
would make Godot's behavior different while minimized vs while the
window is presenting.
\* `OS::add_frame_delay` _does_ consider being minimized, but it just
throttles CPU usage. Some platforms such as Android completely disable
processing because the higher-level code stops being called when the app
goes into the background. But this seems like an implementation detail that
diverges from the rest of the platforms (e.g. Windows, Linux & macOS
continue to process while minimized).
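Conceptually, the new counter works like this (a simplified sketch, not the exact code; the `FramePump` wrapper and the `frame_count` default are illustrative):

```cpp
struct FramePump {
	uint32_t frame_count = 3u; // double/triple buffering depth
	uint32_t frames_pending_resources_for_processing = 0u;

	// Re-armed whenever new GPU work is queued, so buffered commands keep
	// being pumped for frame_count more frames before going idle.
	void on_gpu_commands_queued() {
		frames_pending_resources_for_processing = frame_count;
	}

	bool has_pending_resources_for_processing() const {
		return frames_pending_resources_for_processing > 0u;
	}

	// Called once per frame while minimized, after processing GPU commands.
	void frame_processed() {
		if (frames_pending_resources_for_processing > 0u) {
			frames_pending_resources_for_processing--;
		}
	}
};
```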
# Rename `p_swap_buffers` to `p_present`
**This is potentially a breaking change** (whether it actually breaks
anything, I don't know; but I strongly suspect it doesn't).
"Swap Buffers" is a concept carried from OpenGL, where a frame is "done"
when `glSwapBuffers()` is called, which basically means "present to the
screen".
However, it _also_ means that OpenGL swaps its internal
buffers in a double/triple buffer scheme (in Vulkan, we do that
ourselves, and it is tracked by `RenderingDevice::frame`).
Modern APIs like Vulkan differentiate between "submitting GPU work" and
"presenting".
Before this PR, calling `RendererCompositorRD::end_frame(false)` would
literally do nothing. This is often undesired and the cause of the leak.
After this PR, calling `RendererCompositorRD::end_frame(false)` will now
process commands, swap our internal buffers in a double/triple buffer
scheme **but avoid presenting to the screen**.
Hence the rename of the variable from `p_swap_buffers` to `p_present`
(which slightly alters its behavior).
If we want `RendererCompositorRD::end_frame(false)` to do nothing, then
we should not call it at all.
This PR reflects this change: when we're minimized **_and_**
`has_pending_resources_for_processing()` returns false, we don't call
`RendererCompositorRD::end_frame()` at all.
But if `has_pending_resources_for_processing()` returns true, we will
call it, but with `p_present = false` because we're minimized.
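Put together, the minimized path looks roughly like this (call sites and names simplified for illustration):

```cpp
if (is_minimized) {
	if (has_pending_resources_for_processing()) {
		// Drain buffered GPU work and swap internal buffers,
		// but don't present to the screen.
		end_frame(/* p_present = */ false);
	}
	// Otherwise, don't call end_frame() at all.
} else {
	end_frame(/* p_present = */ true);
}
```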
There's still the issue that Godot keeps processing work (logic,
scripts, physics) while minimized, which we shouldn't do by default. But
that's work for a follow-up PR.
The work was performed in a collaboration between TheForge and Google. I am
merely splitting it up into smaller PRs and cleaning it up.
This is the most "risky" PR so far, because the previous ones have been
miscellaneous changes aimed at either [improving
debugging](https://github.com/godotengine/godot/pull/90993) (e.g. device
lost), [improving the Android
experience](https://github.com/godotengine/godot/pull/96439) (adding Swappy
for better frame pacing + pre-transformed swapchains for slightly better
performance), or harmless [ASTC
improvements](https://github.com/godotengine/godot/pull/96045) (better
performance by simply toggling a feature when available).
However, this PR contains larger modifications aimed at improving
performance or reducing memory fragmentation. With greater
modifications come greater risks of bugs or breakage.
Changes introduced by this PR:
# Transient render targets
TBDR GPUs (e.g. most Android + iOS + M1 Apple GPUs) support rendering to
render targets that are not backed by actual GPU memory (everything
stays in cache). This works as long as the load action isn't `LOAD` and
the store action is `DONT_CARE`. This saves VRAM (it also makes it
painfully obvious when a mistake introduces a performance regression).
This is particularly useful when doing MSAA and keeping the raw MSAA
content is not necessary.
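In raw Vulkan terms, this corresponds to the transient/lazily-allocated attachment pattern. The sketch below shows the concept in a generic form, independent of Godot's abstractions:

```cpp
#include <vulkan/vulkan.h>

void describe_transient_msaa_attachment() {
	// An MSAA color attachment that may never be backed by real VRAM.
	VkImageCreateInfo image_info = {};
	image_info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
	image_info.imageType = VK_IMAGE_TYPE_2D;
	image_info.format = VK_FORMAT_R8G8B8A8_UNORM;
	image_info.extent = { 1280, 720, 1 };
	image_info.mipLevels = 1;
	image_info.arrayLayers = 1;
	image_info.samples = VK_SAMPLE_COUNT_4_BIT;
	image_info.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
			VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT; // contents can stay in tile cache
	// Its memory would be allocated with VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT.

	// The pass must never need the contents outside the render pass:
	VkAttachmentDescription msaa_color = {};
	msaa_color.format = image_info.format;
	msaa_color.samples = VK_SAMPLE_COUNT_4_BIT;
	msaa_color.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR; // anything but LOAD
	msaa_color.storeOp = VK_ATTACHMENT_STORE_OP_DONT_CARE; // resolve, then discard
	(void)msaa_color;
}
```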
# Immutable samplers
Some GPUs are faster when the sampler settings are hard-coded into the
GLSL shaders (instead of being dynamically bound at runtime). This
required changes to the GLSL shaders, PSO creation routines, descriptor
creation routines, and descriptor binding routines.
- `bool immutable_samplers_enabled = true`
Setting it to false enforces the old behavior. Useful for debugging bugs
and regressions.
Immutable samplers require that the samplers stay... immutable, hence
this boolean is useful if that promise gets broken. We might want to turn
this into a `GLOBAL_DEF` setting.
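In Vulkan, immutable samplers are baked into the descriptor set layout via `pImmutableSamplers`. A generic sketch of the mechanism (not Godot's actual routines):

```cpp
#include <vulkan/vulkan.h>

VkDescriptorSetLayout create_layout_with_immutable_sampler(VkDevice p_device, VkSampler p_sampler) {
	// The sampler is fixed at layout (and thus PSO) creation time;
	// it can no longer be swapped out when binding descriptor sets.
	VkDescriptorSetLayoutBinding binding = {};
	binding.binding = 0;
	binding.descriptorType = VK_DESCRIPTOR_TYPE_SAMPLER;
	binding.descriptorCount = 1;
	binding.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;
	binding.pImmutableSamplers = &p_sampler;

	VkDescriptorSetLayoutCreateInfo layout_info = {};
	layout_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
	layout_info.bindingCount = 1;
	layout_info.pBindings = &binding;

	VkDescriptorSetLayout layout = VK_NULL_HANDLE;
	vkCreateDescriptorSetLayout(p_device, &layout_info, nullptr, &layout);
	return layout;
}
```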
# Linear descriptor pools
Instead of creating dozens/hundreds/thousands of `VkDescriptorSet`s every
frame that need to be freed individually when they are no longer needed,
they all get freed at once by resetting the whole pool. Once the whole
pool is no longer in use by the GPU, it gets reset and its memory
recycled (see the sketch at the end of this section). Descriptor sets
that are created to be kept around for longer or forever (i.e. not
created and freed within the same frame) **must not** use linear pools.
There may be more than one pool per frame; how many pools per frame Godot
ends up with depends on their capacity, which is controlled by
`rendering/rendering_device/vulkan/max_descriptors_per_pool`.
- **Possible improvement for later:** It should be possible for Godot
to adapt how many descriptors per pool are needed on a per-key basis
(i.e. grow their capacity like `std::vector` does) after rendering a few
frames, which would be better than the current solution of having a
single global value for all pools (`max_descriptors_per_pool`) that the
user needs to tweak.
- `bool linear_descriptor_pools_enabled = true`
Setting it to false enforces the old behavior. Useful for debugging bugs
and regressions.
Setting it to false is also required when working around driver bugs
(e.g. on the Adreno 730).
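The underlying Vulkan pattern (sketch): allocate from a per-frame pool created without `VK_DESCRIPTOR_POOL_CREATE_FREE_DESCRIPTOR_SET_BIT`, never free sets individually, and reset the whole pool once the GPU is done with that frame:

```cpp
#include <vulkan/vulkan.h>

// Per-frame "linear" pool: vkFreeDescriptorSets is never used on its sets.
void begin_frame_descriptors(VkDevice p_device, VkDescriptorPool p_frame_pool) {
	// Called once the fence for this frame's previous use has signaled:
	// all sets allocated from the pool are invalidated at once and their
	// memory is recycled.
	vkResetDescriptorPool(p_device, p_frame_pool, 0);
}
```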
# Reset command pools instead of individual command buffers
A ridiculous optimization. Ridiculous because the original code
should've done this in the first place. Previously, Godot was doing the
following:
1. Create a command buffer **pool**. One per frame.
2. Create multiple command buffers from the pool in point 1.
3. Call `vkBeginCommandBuffer` on the cmd buffer in point 2. This
resets the cmd buffer because Godot requests the
`VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT` flag.
4. Add commands to the cmd buffers from point 2.
5. Submit those commands.
6. On frame N + 2, recycle the buffer pool and cmd buffers from pt 1 &
2, and repeat from step 3.
The problem here is that step 3 resets each command buffer individually.
Initially, Godot used to have 1 cmd buffer per pool, so the impact was
very low.
But that's no longer the case (especially with the Adreno workarounds
that force splitting compute dispatches into a new cmd buffer, more on
this later). However, Godot still keeps a very low number of command
buffers per frame.
The recommended method is to reset the whole pool, to reset all cmd
buffers at once. Hence the new steps would be:
1. Create a command buffer **pool**. One per frame.
2. Create multiple command buffers from the pool in point 1.
3. Call `vkBeginCommandBuffer` on the cmd buffer in point 2, which is
already reset/empty (see step 6).
4. Add commands to the cmd buffers from point 2.
5. Submit those commands.
6. On frame N + 2, recycle the buffer pool and cmd buffers from pt 1 &
2, call `vkResetCommandPool` and repeat from step 3.
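In Vulkan terms, step 6 becomes a single call per pool instead of an implicit reset per buffer (sketch):

```cpp
#include <vulkan/vulkan.h>

void recycle_frame_commands(VkDevice p_device, VkCommandPool p_frame_pool) {
	// Pool created *without* VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT;
	// one call resets every command buffer allocated from it at once.
	vkResetCommandPool(p_device, p_frame_pool, 0);
	// vkBeginCommandBuffer() on its buffers now starts from a clean state.
}
```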
**Possible issues:** @dariosamo added `transfer_worker` which creates a
command buffer pool:
```cpp
transfer_worker->command_pool = driver->command_pool_create(transfer_queue_family, RDD::COMMAND_BUFFER_TYPE_PRIMARY);
```
As expected, validation was complaining that command buffers were being
reused without being reset (that's good, we now know Validation Layers
will warn us of wrong use).
I fixed it by adding:
```cpp
void RenderingDevice::_wait_for_transfer_worker(TransferWorker *p_transfer_worker) {
	driver->fence_wait(p_transfer_worker->command_fence);
	driver->command_pool_reset(p_transfer_worker->command_pool); // ! New line !
	// ...
```
**Secondary cmd buffers are subject to the same issue, but I didn't alter
them. I discussed this with Dario and he is aware of it.**
Secondary cmd buffers are currently disabled due to other issues (they're
disabled on master).
- `bool RenderingDeviceCommons::command_pool_reset_enabled`
Setting it to false enforces the old behavior. Useful for debugging bugs
and regressions.
There's no other reason for this boolean. Possibly once it becomes well
tested, the boolean could be removed entirely.
# Batched uniform set binding
Adds `command_bind_render_uniform_sets` and
`add_draw_list_bind_uniform_sets` (+ compute variants).
They perform the same work as `add_draw_list_bind_uniform_set` (note
singular vs plural), but on multiple consecutive uniform sets, thus
reducing graph and draw call overhead (see the sketch after this
section's flag).
- `bool descriptor_set_batching = true;`
Setting it to false enforces the old behavior. Useful for debugging bugs
and regressions.
There's no other reason for this boolean. Possibly once it becomes well
tested, the boolean could be removed entirely.
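Hypothetical usage of the plural variant; the exact signature and the set names are approximated for illustration:

```cpp
// Before: one graph node / bind per set.
// rd->draw_list_bind_uniform_set(draw_list, material_set, 0);
// rd->draw_list_bind_uniform_set(draw_list, instance_set, 1);
// rd->draw_list_bind_uniform_set(draw_list, pass_set, 2);

// After: one call binds the consecutive sets 0..2, reducing graph and
// draw call overhead. (Signature approximated.)
RID sets[3] = { material_set, instance_set, pass_set };
rd->draw_list_bind_uniform_sets(draw_list, sets, /* first_set_index = */ 0, /* set_count = */ 3);
```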
# Split the swapchain blit into its own submission
Godot currently does the following:
1. Fill the entire cmd buffer with commands.
2. `submit()`
- Wait with a semaphore for the swapchain.
- Trigger a semaphore to indicate when we're done (so the swapchain
can submit).
3. `present()`
The optimization opportunity here is that 95% of Godot's rendering is
done offscreen.
A final fullscreen pass then copies everything to the swapchain; Godot
practically never renders directly to the swapchain.
The problem with this is that the GPU has to wait for the swapchain to
be released **to start anything**, when we could start *much earlier*.
Only the final blit pass must wait for the swapchain.
TheForge changed it to the following (more complicated, I'm simplifying
the idea):
1. Fill the entire cmd buffer with commands.
2. In `screen_prepare_for_drawing` do `submit()`
- There are no semaphore waits for the swapchain.
- Trigger a semaphore to indicate when we're done.
3. Fill a new cmd buffer that only does the final blit to the
swapchain.
4. `submit()`
- Wait with a semaphore for the submit() from step 2.
- Wait with a semaphore for the swapchain (so the swapchain can
submit).
- Trigger a semaphore to indicate when we're done (so the swapchain
can submit).
5. `present()`
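In Vulkan semaphore terms, the new scheme looks roughly like this. This is a heavily simplified sketch: the handles passed in are assumed to exist, and Godot's real implementation goes through `RenderingDeviceGraph` rather than raw submits:

```cpp
#include <vulkan/vulkan.h>

void submit_split(VkQueue p_queue, VkCommandBuffer p_offscreen_cmd, VkCommandBuffer p_blit_cmd,
		VkSemaphore p_offscreen_done, VkSemaphore p_swapchain_acquired,
		VkSemaphore p_render_done, VkFence p_frame_fence) {
	// Submit 1: all offscreen rendering. No swapchain wait, so the GPU
	// can start immediately.
	VkSubmitInfo offscreen = {};
	offscreen.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
	offscreen.commandBufferCount = 1;
	offscreen.pCommandBuffers = &p_offscreen_cmd;
	offscreen.signalSemaphoreCount = 1;
	offscreen.pSignalSemaphores = &p_offscreen_done;
	vkQueueSubmit(p_queue, 1, &offscreen, VK_NULL_HANDLE);

	// Submit 2: only the final blit waits, for both the offscreen work
	// and the acquired swapchain image.
	VkSemaphore waits[2] = { p_offscreen_done, p_swapchain_acquired };
	VkPipelineStageFlags wait_stages[2] = {
		VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
		VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
	};
	VkSubmitInfo blit = {};
	blit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
	blit.waitSemaphoreCount = 2;
	blit.pWaitSemaphores = waits;
	blit.pWaitDstStageMask = wait_stages;
	blit.commandBufferCount = 1;
	blit.pCommandBuffers = &p_blit_cmd;
	blit.signalSemaphoreCount = 1;
	blit.pSignalSemaphores = &p_render_done; // vkQueuePresentKHR waits on this
	vkQueueSubmit(p_queue, 1, &blit, p_frame_fence);
}
```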
Dario discovered this problem independently while working on a different
platform.
**However, TheForge's solution had to be rewritten from scratch:** the
complexity needed to achieve it was high and quite difficult to
maintain with the way Godot works now (after the Übershaders PR).
On the other hand, re-implementing the solution became much simpler,
because Dario had already had to do something similar: to work around an
Adreno 730 driver bug, he had to implement splitting command buffers.
**This is exactly what we need!** Thus it was rewritten using this
existing functionality for a new purpose.
To achieve this, I added a new argument, `bool p_split_cmd_buffer`, to
`RenderingDeviceGraph::add_draw_list_begin`, which is only set to true
by `RenderingDevice::draw_list_begin_for_screen`.
The graph will split the draw list into its own command buffer.
- `bool split_swapchain_into_its_own_cmd_buffer = true;`
Setting it to false enforces the old behavior. This might be necessary
for consoles, which follow an alternative solution to the same problem.
If not, we should consider removing it.
PR #90993 added `shader_destroy_modules()`, but it was not actually in
use.
This PR adds several places where `shader_destroy_modules()` is called
after initialization to free the memory of SPIR-V structures that are no
longer needed.
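The pattern is roughly the following (a sketch; argument lists are elided and `shader_id` is an illustrative name, not the real signature):

```cpp
// Once every pipeline variant that needs the SPIR-V has been compiled,
// the driver-side modules can be destroyed to reclaim memory.
// pipeline = driver->pipeline_create(shader_id, /* ... */);
driver->shader_destroy_modules(shader_id);
```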