feat: add Cypress #1094
Conversation
✅ Deploy Preview for modest-rosalind-098b67 ready!
✅ Deploy Preview for asyncapi-studio-design-system ready!
✅ Deploy Preview for studio-next ready!
@Amzani @KhudaDad414
I started with the addition of @Amzani
Not for the existing studio; we only have one for the new improvements that will be integrated once we finish the NextJS migration: https://www.figma.com/proto/bPMB3lkMTOOMuKk0oGLuNm/Studio?type=design&node-id=96-2392&scaling=contain&page-id=0%3A1&starting-point-node-id=2%3A2
I vote for
The goal is regression testing, so I would suggest the full flow (e2e); we can encourage component testing for new features we add to the project. @KhudaDad414 WDYT?
I would remove CodeMirror Integration from the scope of this issue for now, until we merge everything.
Force-pushed from 880b47e to 5d9397e.
I'm working on code generation for
Code responsible for this is
It needs to be checked on the server what version of
@aeworxet we should remove the dependency on https://github.com/asyncapi/server-api in the NextJS version of Studio, as this repository might be deprecated soon. So the idea is to directly use the generator library. Not all of them support V3; AFAIK the following support V3.
If < V3, all the generators should work as expected.
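For reference, calling the generator library directly (instead of POSTing to server-api) could look like the following sketch. It is based on the public `@asyncapi/generator` API; the template name, output directory, and file path are illustrative assumptions, not taken from this PR:

```typescript
// Hypothetical sketch: generating from the library directly.
import path from 'path';
// The package ships without bundled TypeScript types.
// @ts-ignore
import Generator from '@asyncapi/generator';

async function generate(): Promise<void> {
  // Generator(templateName, targetDir) per the library's documented API.
  const generator = new Generator('@asyncapi/html-template', path.resolve('./output'));
  await generator.generateFromFile(path.resolve('./asyncapi.yaml'));
}
```

This removes the HTTP round trip entirely, at the cost of bundling the generator and its templates into Studio's server side.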
@Amzani would you mind linking to where that decision has been made? Just to be in sync with asyncapi/server-api#576, where the idea, AFAIK, was to keep a server but with the code living in the CLI repo.
Force-pushed from e683688 to cf031ba.
@smoya I was assuming that if only Studio uses this API, there is no need to maintain it anymore, thus deprecating the
@KhudaDad414 suggests the opposite, to stick with
Should I invest time into rewriting the logic for SSR, or should I just bugfix the current approach? (#1094 (comment) is still relevant then)
@aeworxet IMHO it makes sense to bugfix the current code base. By that I mean sticking to the server-api approach. As Fran said in the original issue, the migration is already hard enough; trying to implement new features at the same time would quadruple the complexity.
@KhudaDad414
@aeworxet Since you get the error on the current Studio as well, I would consider resolving it to be out of scope for this Bounty Issue. We can have another issue and discuss it over there.
I'm investigating options for deployment of
The most runtime-dynamic part of Studio currently is the generation of templates, and it can be done through a POST request to an API from a static website (in fact, it is made this way currently). I'm not sure OG preview generation is mandatory functionality for a containerized Studio; it can be left reserved for https://studio.asyncapi.com
I have deployed to https://asyncapi-studio-studio-next.vercel.app a build of
The whole 'export' build is nine (9) MB; it has First Contentful Paint
Should this version of Studio's build be used for a Docker container?
We want to enable long-term features like authentication, for instance. Generation of templates using a POST request to an API should be something we could migrate, IMO, as we don't want to maintain an entire repository just for one single
How complex is having everything in a Docker image?
Force-pushed from e1862f7 to 2319b09.
It is supposed to be straightforward, but I ran into an error on a command that executes fine on desktop, so I'm investigating it.
Force-pushed from 1044afc to 1ceb4e8.
Force-pushed from 770323f to 1ceb4e8.
Force-pushed from ff6bf4c to a07f335.
SonarCloud: Quality Gate passed.
apps/studio-next/Dockerfile
Outdated
IMHO it is better to follow https://turbo.build/repo/docs/guides/tools/docker and create the Docker image that way. It takes forever to install the dependencies (more than 30 minutes); I think it's because we are not leveraging turbo prune to get rid of the unwanted packages.
Another point: it takes forever to run Cypress's post-install scripts (I think it installs all of the browsers with each install, like Puppeteer). There should be a way to disable that when building a container, something like --ignore-scripts in npm. 🤔
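As a general note (not something this PR implements), Cypress documents an environment variable that skips its post-install binary download, and npm can skip post-install scripts entirely:

```shell
# Skip only Cypress's post-install binary download
# (CYPRESS_INSTALL_BINARY=0 is honored by the cypress npm package).
CYPRESS_INSTALL_BINARY=0 npm ci

# Or skip every package's post-install scripts in one go:
npm ci --ignore-scripts
```

Either line would fit a container build stage where the test runner's browsers are not needed.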
apps/studio-next/package.json
Outdated
@@ -6,7 +6,14 @@
    "dev": "next dev -p 3001",
    "build": "next build",
    "start": "next start",
-   "lint": "echo 'No linting configured'"
+   "lint": "next lint",
+   "docker:build": "docker build -t asyncapi/studio:latest .",
docker build should be done at the root of the project (so Turborepo is able to prune the unwanted packages), maybe something like "build:next-studio:docker": "docker build -t asyncapi/studio:latest -f StudioNextDockerfile ." at the root.
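Following the Turborepo Docker guide, the root-level flow being suggested might look like this (the Dockerfile name comes from the comment above and is illustrative, not an existing file):

```shell
# Run from the repository root.
turbo prune studio-next --docker   # writes a pruned workspace into ./out
docker build -t asyncapi/studio:latest -f StudioNextDockerfile .
```

The prune step leaves only studio-next and its workspace dependencies, so the image build copies far less into its context.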
Simply making sure that the Turborepo will not start to be turned into a monolith again after this change.
Might it be better, just in case, to keep the possibility of dockerizing each app separately with Docker Compose, like it's done in https://github.com/dhaaana/turborepo-with-docker:
- https://github.com/dhaaana/turborepo-with-docker/blob/main/docker-compose.yml
- https://github.com/dhaaana/turborepo-with-docker/blob/main/apps/api/Dockerfile
- https://github.com/dhaaana/turborepo-with-docker/blob/main/apps/web/Dockerfile
?
(meaning the Dockerfile will stay where it is now, but a Docker Compose setup will be added)
If it's possible to publish the separate docker image after this, I have no issue with it.
turbo prune studio-next --docker copies pnpm's symlinks into ./out's node_modules, and after being copied they immediately become invalid (because they point to their previous relative destinations), but doing pnpm install each time there's a need to build a Docker image is not an option either. So I'm trying different configurations to achieve local caching in at least some form, and each build takes time. Hence the delay.
I changed the approach in favor of pushing the already compiled Studio into the Docker image.
I took advantage of the turbo build cache, and now, if there is a fresh turbo build cache, the generation of the Docker image takes not even seconds but milliseconds.
Please try running
turbo build
turbo run build:studio-next:docker
in the root of the repo and check.
The generated Docker image (now 294 MB) is run with
docker run -p 3001:3001 asyncapi/studio
Upon some reflection, I think my tone might have come across as too critical, and for that I apologize.
My intention was to provide technical feedback, not to criticize you or your efforts personally.
I know you have worked hard on the issue, and I don't think it was just for you to be suspended from the bounty program or to not get the reward for this bounty issue.
If you think it's best to remove the Docker part and merge the remaining code, let's do that.
Maybe you should clone this branch from scratch and, in the repo's root, do
pnpm install
docker builder prune -a # <-- remove Docker's build cache
turbo build:studio-next:docker
?
Just built again with these steps.
I cleaned the project, reinstalled the packages, and built the image with turbo build:studio-next:docker.
The size now shows up correctly.
What do you think about reusing the same compiled code for two simultaneous deployments now?
The release process can have two simultaneous deployments:
- Build and deploy to studio.asyncapi.com.
- Reuse the already built binaries (with the turbo cache) to generate (push them into) a Docker image and deploy it to Docker Hub.
This process will guarantee that there is one and the same Studio, built from one and the same commit, with the same hashes, in both places.
(I already have a situation right now where the same bug gets reproduced in one place, doesn't in another, and gives a third error in the local dev.)
What do you think about reusing the same compiled code for two simultaneous deployments now?
Just wanted to clarify all my concerns. If they are invalid we can definitely go ahead with this approach.
Building in One Environment and Running in Another Is Generally a Bad Idea
As you know, we are using Linux to run random workflows. However, when it comes to publishing or testing, we use a matrix of Linux, Windows, and Mac to build, publish, or test our code.
Why? Because we want to make sure that it is built within the environment that it is going to run on. How can you be sure that if you build something on Linux, it will run on Windows? You might argue that they are different since Docker has its own engine on top of the OS and it will work. You might be right, but we need to test that theory. Is it worth it to go through the trouble? I don't think so.
All I know is that some packages are platform-dependent. By installing in the Docker image itself, we make sure that all our packages are installed, built, and run in the same environment.
That's why building in the Docker image itself is always better.
Transparency
When I look at the Dockerfile, I should be able to see that yes, I am actually downloading the source code and building it myself on my machine. This gives me peace of mind that nobody is pushing some fishy code into my environment. Most of the ecosystem now doesn't trust pre-built packages because of incidents like the XZ fiasco.
Docker Cache
As I mentioned previously, Docker is really smart when it comes to building and running your image. If you follow best practices, it can help your package be as slim as possible.
For example, if you haven't changed your package.json or lock file, why would you want to install your packages from scratch? Well, you don't want that. You want to get the source code, which is probably a few MB, use the cached version of the package installation layer, and build on top of that, right? Well, Docker is going to do exactly that. But if you are pushing an already built 290MB of code in a single layer, Docker won't be able to do anything. Even if the next version only changes by one byte, it has to invalidate that layer and install the entire 290MB of code again.
So in a Dockerfile, it is always better to break down your build, installation, and run layers as much as you can. More layers mean Docker can cache more efficiently, both when building and when running.
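The layering argument above can be sketched as a minimal multi-stage Dockerfile. This is illustrative only (base image, paths, and the use of pnpm are assumptions, not taken from this PR):

```dockerfile
# Stage 1: dependencies — this layer is reused until
# package.json / pnpm-lock.yaml actually change.
FROM node:20-alpine AS build
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile

# Source code — editing a source file invalidates only this layer
# and the ones below it, not the dependency install above.
COPY . .
RUN pnpm run build

# Stage 2: slim runtime image carrying just what's needed to serve.
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/.next ./.next
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/package.json ./
EXPOSE 3001
CMD ["npx", "next", "start", "-p", "3001"]
```

With this split, a one-byte source change rebuilds only the source and build layers; the (large, slow) dependency layer stays cached.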
Harder to Maintain
It is hard to maintain a separate and custom workflow to do things. People expect simplicity. If I want to build an image, I would naturally expect the docker build command to work. Why does it have to depend on other processes like installation and building?
If you build one more layer of abstraction on top and use a script, it is suspicious, to say the least. Contributors/users will have to check what exactly you are doing in that script and why they can't just build it with a simple docker build command.
Thanks for investing the time in explaining all that.
I now understand that I should have taken specific end users' needs into account as well, not only aimed at easing the release process for the 'vendor.'
The Docker build should definitely be developed by someone who understands Docker better.
Removed the Docker-related code and changed the PR's description accordingly.
@@ -0,0 +1,6 @@
+   import { NextResponse } from 'next/server';
Not sure if we can use Server Actions instead 🤔 ?
I searched the Internet back and forth and couldn't find answers to several simple questions.
You seem to be very knowledgeable in Next.js server matters, so I would like to consult with you on this topic.
Server Actions are meant for performing data fetching and mutations on the Next.js server.
To my understanding, this implies that the server ALREADY exists as an independent entity, in order for a Server Action to have an environment to be executed in.
In this particular case, right now, the full server consists of one API route.
Thus, if this one and only API route is replaced by a Server Action, the server will simply vanish, and the Server Action will have no place to be executed in.
Besides, this API route is responsible for the server response, which requires exactly the next/server package, and not for data fetching from a third-party resource or a mutation.
Taking this into account, does a Server Action even make sense in this case, or should the structure with one API route remain? (For now, the final architecture of Studio's server side is not defined yet, and this route is added only as a proof of concept and a stub in the place where the future server will be.)
Please correct me if I'm getting something wrong in the concept of Server Actions, which is new to me.
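For context, a Route Handler of the kind being discussed, in the Next.js App Router, looks roughly like the sketch below. The file path and echo behavior are illustrative assumptions, not copied from the PR:

```typescript
// app/api/v1/generate/route.ts — hypothetical proof-of-concept handler
import { NextRequest, NextResponse } from 'next/server';

// Echo the posted body back, confirming the server side is alive.
export async function POST(request: NextRequest) {
  const body = await request.json();
  return NextResponse.json({ received: body });
}
```

Unlike a Server Action (a function invoked from React components), this handler is a standalone HTTP endpoint, which matches the "server as an independent entity" framing above.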
I searched the Internet back and forth and couldn't find answers to several simple questions. You seem to be very knowledgeable in Next.js server matters, so I would like to consult with you on this topic.
I'm still new to React Server Components (RSC), so I will answer as much as I know.
Server Actions are meant for performing data fetching and mutations on the Next.js server. To my understanding, this implies that the server ALREADY exists as an independent entity, in order for a Server Action to have an environment to be executed in. In this particular case, right now, the full server consists of one API route. Thus, if this one and only API route is replaced by a Server Action, the server will simply vanish, and the Server Action will have no place to be executed in. Besides, this API route is responsible for the server response, which requires exactly the next/server package, and not for data fetching from a third-party resource or a mutation. Taking this into account, does a Server Action even make sense in this case, or should the structure with one API route remain (for now, the final architecture of Studio's server side is not defined yet, and this route is added only as a proof of concept and a stub in place where the future server will be)?
The thing about API Routes, with their rebranding as Route Handlers, is that they are not supposed to be replaced by Server Actions; they are different options in the architecture that would be considered during the design.
https://www.youtube.com/watch?v=PtX5PF17tlQ
Please correct me if I'm getting something wrong in the concept of Server Actions, which is new to me.
Server Actions are supposed to work on top of React Server Components (RSC), but the majority of the applications inside StudioWrapper do not leverage RSC; even NextJS features like SSR/SSG do not work there.
Static Site Generation -> renders JSX to HTML at build time.
Server-Side Rendering -> renders JSX to HTML on the server when requested by a client (send only what you need).
This is one of the good resources where Dan Abramov explains the responsibilities of SSR (what NextJS did) and RSC, and how they work together interchangeably:
reactwg/server-components#5
@jerensl
Thanks.
@KhudaDad414
Would it be safe to assume that Server Actions should not be used currently?
@aeworxet you are right, we first need to decide how we are going to architect the components for the app directory.
@KhudaDad414
@aeworxet they should be ignored.
Disabled the ESLint rules for the warnings and the error.
SonarCloud is frozen. Sergio was dealing with this issue several days ago.
Can this PR be merged, or is there anything else that needs to be done on it?
@aeworxet could you fix the conflict?
SonarCloud: Quality Gate passed.
@Amzani
I had to rename
which was renamed recently in
In the change's PR only the name was changed, so all functionality is supposed to remain the same, but pinging @jonaslagoni just in case.
This PR introduces for ./apps/studio-next:
- Cypress, testing a minimal set of the web application's UI elements due to the upcoming migration to ./apps/design-system (a skeleton of the UI testing that will need to be reviewed after the migration)
- npm run cy:open
- npm run cy:e2e:* (refer to the cy:e2e:* part of package.json for a list of available browsers; edge is reserved for Windows machines)
- API route http://localhost:3001/api/v1/generate that only returns requests, just as a proof of concept that the server side is functioning (works only in a local dev environment; the deployed version is subject to CORS restrictions)
Deployment to Vercel is successful and can be viewed at https://asyncapi-studio-studio-next.vercel.app
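A spec in the style this PR introduces might look like the sketch below. The file name and selector are hypothetical, not the PR's actual tests:

```typescript
// cypress/e2e/studio.cy.ts — hypothetical smoke-test spec
describe('Studio smoke test', () => {
  it('loads the editor page', () => {
    // Assumes `next dev -p 3001` (or `npm run dev`) is serving locally.
    cy.visit('http://localhost:3001');
    // The selector is illustrative; real specs would target concrete UI elements.
    cy.get('body').should('be.visible');
  });
});
```

Run interactively with npm run cy:open, or headlessly with one of the cy:e2e:* scripts from package.json.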
Related to #661
Related to #1101