test: prevent handling in-flight requests after worker was stopped (#2668)
diego-aquino wants to merge 2 commits into mswjs:main
Conversation
This adds two temporary test cases to help us investigate requests still being handled by workers after they are stopped. Related to mswjs#2597.
Hey, @diego-aquino. Thanks for adding these tests! Yes, I highly recommend putting this on hold until #2650 is merged. It contains fundamental changes to the internal architecture and, in particular, to how the worker is handled. While it won't fix this issue, it would be easier to branch the fix from there.
I was investigating #2597 and comparing the reproduction repository with MSW's test suite. The reproduction tests frequently fail due to requests still being handled after the worker was stopped.
However, the existing MSW test checking this behavior does not appear to be flaky:
`msw/test/browser/msw-api/setup-worker/stop/in-flight-request.test.ts` (line 31 in 5ab5857)
I've created two new test cases to better understand the problem:
1. `bypasses requests not immediately made after the worker was stopped, considering no default response`

   This test stops the worker and makes a request in different `page.evaluate` calls. It checks whether the request was bypassed by expecting a `Cannot GET` response text. From my local executions, it always passes.

2. `bypasses requests immediately made after the worker was stopped, considering no default response`

   This test stops the worker and makes a request in the same `page.evaluate` call. Like test 1, it expects a `Cannot GET` response text, but it shows the same flaky behavior as the reproduction tests, although less frequently.

Here's an example of a failure:
Note
I've added a for loop to repeat the new tests and make them more likely to fail. Still, not all test executions cause failures. Run them a couple of times if they continue to pass.
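For intuition, the timing difference the two tests probe can be modeled with a toy sketch. All names here are illustrative; this is not MSW's actual implementation or API. The point is that if stopping only takes effect on a later tick, a request fired in the same tick as the stop call can still be handled:

```typescript
// Toy model of the race (hypothetical names, not MSW's API).
// A "worker" that handles requests unless it has been stopped. If stop only
// flips a flag on a later tick, a same-tick request can still be handled.

class ToyWorker {
  private active = true

  stop(): void {
    // Synchronous stop: the flag flips before any subsequent request is seen.
    this.active = false
  }

  stopDeferred(): Promise<void> {
    // Deferred stop: the flag flips on a later macrotask, so a request
    // fired immediately after calling stopDeferred() may race it.
    return new Promise((resolve) => {
      setTimeout(() => {
        this.active = false
        resolve()
      }, 0)
    })
  }

  handle(path: string): string {
    return this.active ? 'mocked response' : `Cannot GET ${path}`
  }
}

// Scenario 1: stop and request in separate ticks (like separate page.evaluate calls).
const worker1 = new ToyWorker()
worker1.stop()
setTimeout(() => {
  console.log(worker1.handle('/resource')) // prints "Cannot GET /resource" — bypassed
}, 0)

// Scenario 2: stop and request in the same tick, with a deferred stop.
const worker2 = new ToyWorker()
void worker2.stopDeferred() // fire-and-forget, not awaited
console.log(worker2.handle('/resource')) // prints "mocked response" — the race
```

If something like this deferred flip is happening, it would also explain why the extra tick introduced by a separate `page.evaluate` call makes test 1 pass reliably.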
Maybe the overhead of `page.evaluate` is masking the issue in the existing test?

This pull request does not yet contain any changes to the source code. I've only added tests trying to simulate the issue.
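As a side note on the for-loop approach mentioned above: repeating a flaky check is effective because the probability of seeing at least one failure grows quickly with the repetition count. A quick sketch with illustrative numbers (the 5% per-run failure rate is assumed, not measured):

```typescript
// Sketch: why repeating a flaky test surfaces intermittent failures.
// flipFlaky simulates a test that fails intermittently; the rate is illustrative.

function flipFlaky(failureRate: number): boolean {
  return Math.random() < failureRate
}

const REPEAT = 50
let failures = 0
for (let i = 0; i < REPEAT; i++) {
  if (flipFlaky(0.05)) failures++
}

// With a 5% per-run failure rate, 50 repetitions fail at least once with
// probability 1 - 0.95^50 ≈ 0.92, which is why the loop makes the flake visible
// even though a single run usually passes.
console.log(`observed ${failures} failure(s) out of ${REPEAT} runs`)
```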
@kettanaito, as I'm new to the codebase, I'm not yet sure how to fix this. I'm willing to contribute after hearing your feedback and possible courses of action. Thanks!
Closes #2597.