Error in DX11 CPU Texture Share Mode



  • #4205

    geringsj
    Participant

    Hello,

Using the latest state of Spout from GitHub, I encounter a problem receiving textures in DX11 CPU texture share mode.
The weird thing is that this bug only happens with my application, and only after resizing the texture; it does not occur with the provided SpoutSender.exe/SpoutReceiver.exe applications. I tested on AMD and NVIDIA GPUs and it happens on both.

I uploaded a minimal example of this bug together with a bug report to AMD regarding an issue with their drivers when using Spout. You can download the ‘mini_spout_amd.zip’ attachment from https://community.amd.com/message/2893683 to have a look at my code.
You need to build a Visual Studio Win64 solution using CMake; then, in ‘mini_spout_amd\src\main.cpp’ at lines 258/259, you need to set “dx9Mode = false” and “shareMode = 1” to reproduce the bug. Starting the application and receiving the broadcast image with SpoutReceiver.exe shows the behaviour seen here: https://i.ibb.co/VSbX8D1/Lines.png
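
For reference, those two settings in main.cpp look roughly like this (only a sketch, using the names described above; the exact types in my code may differ):

// mini_spout_amd\src\main.cpp, around lines 258/259
bool dx9Mode = false;  // use the DX11 path instead of DX9
int shareMode = 1;     // 1 = CPU texture share mode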

    Maybe somebody here has an idea what is happening and can help me fix this?

Also, if somebody has an AMD GPU, it would be nice if you could test and confirm the bug I reported to AMD 🙂

    #4206

    leadedge
    Keymaster

    I could build your project successfully and reproduce the skew problem with CPU share mode.

It appears that the texture width must be a multiple of 64. It’s not a power-of-2 problem, because a width of 576 works. I had a look through the code and can’t immediately see what is going on, but I don’t want to go any further with it, because CPU texture share will be removed in the next version and memory share mode performance will be improved instead.
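
If you need CPU share mode in the meantime, an untested thought is to round the sender width up to the next multiple of 64 on your side, something like:

// Sketch only: pad the requested width up to the next multiple of 64,
// based on the observation above (not a confirmed fix)
unsigned int paddedWidth = (width + 63u) & ~63u;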

I cannot reproduce the resize crash. For some time wglDXUnregisterObjectNV has been a problem when the program exits, though not while it is running. This has been bypassed with an exit flag (line 1160 in SpoutGLDXinterop.cpp – CleanupInterop), but it seems to have been resolved with the more recent NVIDIA drivers. I do not know about AMD cards, or why it occurs in your case even while the program is running.

    I am presently re-working the entire project and tracing such things through. Meanwhile you might bypass the problem by releasing and re-creating the sender.

At line 246 in SpoutSDK.cpp (UpdateSender), replace:

    interop.CreateInterop(g_hWnd, sendername, width, height, dwFormat, false); // false means a sender

with:

    ReleaseSender();
    CreateSender(sendername, width, height, dwFormat);
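
The same idea, seen from the application side with the SpoutSender wrapper, would look roughly like this (just a sketch assuming the 2.006 class names, not the SDK change itself):

#include "SpoutSender.h"

// Sketch: on a size change, release the sender and create it again
// at the new size instead of updating it in place.
void RecreateSenderOnResize(SpoutSender &sender, const char *name,
                            unsigned int newWidth, unsigned int newHeight)
{
    sender.ReleaseSender();                         // drop the old shared texture
    sender.CreateSender(name, newWidth, newHeight); // create it again at the new size
}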

Try the UpdateSender change first. Even if this solves the resize problem, please also remove the exit flag in CleanupInterop:

// if (!bExit) {
if (m_hInteropDevice != NULL && m_hInteropObject != NULL) {
    wglDXUnregisterObjectNV(m_hInteropDevice, m_hInteropObject);
    m_hInteropObject = NULL;
}
// }

and also:

    CleanupDirectX(true);
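
So the end of CleanupInterop would then read roughly like this (the intended result with the bypass removed):

// CleanupInterop, with the exit bypass flag removed
if (m_hInteropDevice != NULL && m_hInteropObject != NULL) {
    wglDXUnregisterObjectNV(m_hInteropDevice, m_hInteropObject); // now called on exit as well
    m_hInteropObject = NULL;
}
CleanupDirectX(true); // release the DirectX objects too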

    Now does it crash when the program terminates?

    If not, I can leave out the exit bypass flag for the next release.

    #4208

    geringsj
    Participant

    Regarding the CPU share mode bug, I do not mind if it does not get fixed, because memory share works for me as an alternative.

Your proposed workaround for the AMD driver bug works for me. A huge thanks for that! Releasing and re-creating the sender does the trick.

    Removing the exit flag from CleanupInterop does not lead to a crash on program exit on my AMD Radeon Pro SSG GPU, both for DX9 and DX11.

    #4209

    geringsj
    Participant

Since you mention re-working the project: do you see a possibility of sending/receiving OpenGL depth buffers via the Spout infrastructure? How hard would this be to implement?

It would be especially helpful to associate the depth buffer of an FBO with a given RGBA texture and to optimize for a joint transfer of both, i.e. to send and receive both textures in a unified and optimized fashion, so that the receiver gets updates to both in a consistent manner.
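
To illustrate what I mean (not a proposal for how it should be implemented), here is a sketch with two coupled senders, assuming the current SpoutSender API and assuming the application has already packed the depth values into an RGBA texture; all names are hypothetical:

#include "SpoutSender.h"
#include <GL/gl.h>

// Sketch: colour and RGBA-encoded depth as two coupled senders.
// A receiver would look for "MyApp" and "MyApp_depth" and treat
// matching frames as one consistent update.
SpoutSender colorSender, depthSender;

void InitSenders(unsigned int width, unsigned int height)
{
    colorSender.CreateSender("MyApp", width, height);
    depthSender.CreateSender("MyApp_depth", width, height);
}

void SendFrame(GLuint colorTex, GLuint depthRGBATex,
               unsigned int width, unsigned int height)
{
    // Both are plain GL_TEXTURE_2D RGBA textures; the depth values
    // have already been encoded into RGBA by an application shader.
    colorSender.SendTexture(colorTex, GL_TEXTURE_2D, width, height);
    depthSender.SendTexture(depthRGBATex, GL_TEXTURE_2D, width, height);
}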

    #4210

    leadedge
    Keymaster

    OK that’s good news, thanks for confirming it with your AMD graphics.

Regarding depth buffers, I think this could be done, but it would not be a trivial task. I would try to avoid changes throughout the code if possible, but I am not sure what would be involved without looking at it. I would need a practical use case, test code, and the time to work on it.

