Is Vue.js Incompatible With Serving POST Requests?

I'm trying to do [thing] with a Vue.js project created with Vue CLI. [Thing] is not super-important to this question, so I'll omit it for the sake of brevity. I've noticed when I run the local web server for this project with
$ npm run serve
that GET requests work just fine, but POST requests give me a 404 ("Cannot POST"). I need to be able to do both.
Using Express, it's straightforward to serve the same page with both GET and POST by simply adding router.post(...) in addition to the default router.get(...). However, in Vue.js this seems difficult.
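For reference, a minimal sketch of what that looks like in Express (the route path and handler bodies are just for illustration):
const express = require('express');
const app = express();
const router = express.Router();

// Serve the same response for both verbs on the same path
router.get('/about', (req, res) => res.send('about page'));
router.post('/about', (req, res) => res.send('about page'));

app.use(router);
app.listen(3000);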
I've spent some time playing with Vue Router, but poring over the documentation I haven't found a configuration option to tell it "Here's how to respond to a POST request" - it seems to require GET.
But maybe I'm trying to pound a square peg into a round hole. Vue.js is geared toward applications that run in the browser, and browsers send GET requests (for the moment I'm not interested in form submissions...), while POST requests tend to be more of a web app/integration/back-end kind of thing.
What do you guys think - is there something obvious I'm missing, or should I do this the "easy" way and switch to Express?
UPDATE: My "problem" is not Vue.js, specifically - it's the binary vue-cli-service, which definitely listens for GETs, but not POSTs. (GETs from Postman succeed; POSTs from Postman fail.) If I build for deployment, webpack turns the project into HTML/JS/CSS, which is served by a different web server and POSTs work just fine - it's just in dev mode where vue-cli-service is serving my application locally that I can't use POST requests.
Is there an undocumented way to make vue-cli-service respond to POST requests? I've scoured the documentation but haven't found anything. I'm not sure how to make another web server serve a Vue.js project, because the webpack configuration is...complex.

Vue Router is not receiving a (GET) request and responding; it is simply reading the current URL and inserting the corresponding component. So in short, no, there is no POST request handler... I'd argue it's not even handling GET requests either, just reading the URL, which looks like a GET request.
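To illustrate, a minimal sketch (assuming Vue 2 with vue-router 3; the About component is hypothetical):
import Vue from 'vue';
import VueRouter from 'vue-router';
import About from './components/About.vue';

Vue.use(VueRouter);

// The router only maps the current URL to a component on the client;
// no HTTP request handling (GET or POST) is involved.
export default new VueRouter({
  routes: [
    { path: '/about', component: About },
  ],
});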
If you are trying to POST between pages inside your app, Vuex is what you want.
If you are trying to POST to your app from outside, having an actual server listening for requests which you can ping (i.e., Express) will be easier.
There may be a way to use Axios to do this from your app. It can listen for responses to POST requests, so if it were listening I don't see why it couldn't receive them. However, I suspect you'd have to listen on a port of the machine where your app is running, which would be a major security issue (if a client's browser/OS/antivirus even let you).

It's been nearly two weeks since I posted this question, but I didn't even post the question until I'd been struggling with the problem on my own for a week. I finally came to a solution after much head-desking and more dead ends than it would be healthy for my blood pressure to recount. Here it is:
Deciding perhaps my best bet was to look at the vue-cli source code, I happened to notice it includes documentation.
The README.md file under docs/config is a bit sparse-looking in what it says about the devServer option, but it also mentions that "All options for webpack-dev-server are supported." Ooh.
The webpack documentation shows a devServer.before option that allows you to access the Express app object and add your own custom middleware to it.
This allows you to intercept the POST and redirect it as a GET.
Which is what this guy, who was having the exact same problem as I was, ultimately did. (Note that he used devServer.setup, which does the same thing but is deprecated in favor of devServer.before.) In vue.config.js, include:
module.exports = {
  devServer: {
    before: function (app) {
      // Intercept POSTs to this route and answer with a redirect,
      // which the browser then follows as a GET
      app.post('/about.html', function (req, res) {
        res.redirect('/about.html');
      });
    },
  },
};


Cloudscraper does not solve normal JS check Cloudflare challenge

I'm using the cloudscraper package (PyPI, Github) for web requests on a site that is protected with Cloudflare.
I am well aware there are challenges that can't be solved yet with this package, particularly the "v2 challenges" with recaptchas and so on.
However, for me, the package seems to not work at all. When I do a GET request with
s.get(my_url)
where s is a Cloudscraper session object, I often get a HTML page with this title: "Attention Required! | Cloudflare".
This is the standard Cloudflare Javascript challenge, which just checks whether the browser supports JS.
I don't know why this happens. I made sure that:
I have a 'realistic' user agent set, with Chrome as the browser argument in the cloudscraper.CloudScraper() constructor.
Requests are timed and not too fast; I wait between requests.
I have all the package requirements installed, meaning besides cloudscraper itself: requests, requests-toolbelt, and js2py as the engine.
There is no issues section on the Github repo.
The Javascript check is the simplest challenge that Cloudflare can throw at us. Still, this package, which has the sole purpose of solving some Cloudflare challenges, fails to even get past this simple check.
What am I overlooking? Cloudflare makes web automation a nightmare...
EDIT: Also, the Cloudflare page says 'Please enable Cookies and reload the page.' although normally cookies are automatically accepted by the request session's RequestsCookieJar.

Why is my client-side code being compiled and run on the Node backend?

I'm new to SSR, so I'm not sure if this problem and my solution are standard practice, but I can't imagine so.
My goal is to have a dynamic page that allows users to add/remove items on the page. I originally programmed this component with the intention of it only being a client-side React project, but now I want to put it on a server. Now that I'm translating my code to the new project, I've run into a couple of errors that have to do with my backend running code that is only supposed to run on the client side.
For instance, I ran into this problem earlier (React Redux bundle.js being thrown into request), and I was able to solve it with a janky solution where I check that the code is running on the client side and stop execution when it's being run from the backend. Now I've had to refactor my code to not use the fetch() function, because it's not a function the Node backend recognizes, and Node is now complaining about my use of the document object, because that's not a thing there either.
I can keep going, importing new modules to fix the errors and keep my website from crashing, but I feel like I'm on a small boat patching up new holes with duct tape, waiting to find the next one.
Here's an image of my config if that's necessary; I also have additional images in my previous Stack Overflow question (link above).
For the bundle.js issue, I am not sure I understand why it happens.
For the fetch issue, I think this is a common problem with SSR, and if you implement it yourself you need to handle conditions in different places of your app:
// a bare `window` reference would throw a ReferenceError in Node,
// so test it with typeof instead
if (typeof window !== 'undefined') {
  // do client-side stuff like accessing
  // window.document
}
Basically, the most common usage of SSR is to handle the first execution of your app on the server side. This includes:
Route resolution
Fetching data (using the Node.js http module)
Hydrating stores (if you use Redux or another data library)
Rendering the UI
Once the execution is done, your server sends the bundled JS app, with its hydrated store and rendered UI, back to the client. Subsequent requests or route updates will be executed on the client side, so there you can directly use fetch or react-router.
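As a rough sketch of that first server-side pass (assuming Express and React; App, createStore, and loadInitialData are hypothetical stand-ins for your component, store factory, and data loader):
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const App = require('./App');
const { createStore } = require('./store');

const app = express();

app.get('*', async (req, res) => {
  // Route resolution and data fetching happen here, on the server
  const store = createStore();
  await store.loadInitialData(req.url); // hypothetical data loader

  // Render the UI to a string and embed the hydrated store in the page
  const html = renderToString(React.createElement(App, { store }));
  res.send(`<!doctype html>
<div id="root">${html}</div>
<script>window.__PRELOADED_STATE__ = ${JSON.stringify(store.getState())}</script>
<script src="/bundle.js"></script>`);
});

app.listen(3000);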
The pros of doing SSR are:
Great first contentful paint
Great for SEO
The client-side machine does less work
There are a lot of libraries that can help you with SSR, as well as frameworks like Next.js and use-http

Why Next.js API?

I'm starting with Next and I'm kind of confused about how the SSR and the API work. When should I use the API folder in pages instead of having my own server with a database? Will the Next server used for SSR have any conflict if I had my own server in Node, for example?
The point is to provide a simple, scalable alternative to running your own server.
When you deploy API routes via Vercel, it will provision AWS Lambda functions on the backend to form your API.
These functions are sort of like individual snippets of code that get run on demand when you have traffic.
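For instance, a single file under pages/api becomes one of these functions (a minimal sketch; the route name and payload are made up):
// pages/api/hello.js
export default function handler(req, res) {
  // Deployed on Vercel, this handler runs as an on-demand Lambda function
  res.status(200).json({ message: 'Hello from an API route' });
}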
You're right in the sense that there isn't too much difference. Just an alternative way to deploy your API. The main purpose is to make it easy and reduce the management associated with running a server.
For most use cases it should work well, but please note it doesn't have support for WebSockets.
You're free to use API routes, or your own server. It doesn't matter and won't impact the SSR.

Single page web app (SPA) link/refresh routing design considerations and caching

What is the recommended practice for sending the SPA code to the client with routing considerations? How should the SPA code be sent to the client when they go directly to a link (e.g. website.com/users/user1) rather than visiting the root path first?
An example best illustrates the question:
The path website.com/users/user1 responds with some application/JSON so that the SPA can fill in the layout with the information.
Let's say that you have a basic SPA and a request is made to website.com/users/user1 (assume no auth is required) without the client having first visited the root path. Had they started at the root path, it would be clear that we should send the SPA code (HTML, CSS, JavaScript) to the client, after which they can make their requests to the different routes through the web app. But when the user visits websitename.com/users/user1 directly, the server doesn't know whether the client needs all of the SPA code first or just the JSON (maybe the most recent version is cached, or they are visiting website.com/users/user1 after having first visited website.com/, which knows to make a specific request to the server and ask for the JSON).
How is this typically handled? Is a flag, date, or something else sent to the webserver with the SPA so that the server knows the client has the SPA code? This could be done via the SPA requesting a content type of application/json on the route rather than a standard GET request? Or setting a header that the SPA sends back denoting its most recent version (this way we can use caching and if it isn't the most recent, or there is no SPA yet, a new version may be sent).
How is it recommended that the SPA handle this? Does the SPA check the URI and note that it has only just received the SPA code from the server, and not the actual content (e.g., user1's information)? And how is it recommended that we check this? For example, the server sends back the SPA code and sets a header denoting that the SPA needs to make another request to website.com/user/user1 to retrieve the actual JSON of user1's info rather than the SPA code.
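To make the header/content-type idea concrete, here is a hedged sketch of content negotiation on a single route (Express; getUser is a hypothetical data lookup):
const path = require('path');
const express = require('express');
const app = express();

app.get('/users/:id', (req, res) => {
  // The SPA fetches with an Accept: application/json header,
  // while a direct browser visit prefers text/html
  if (req.accepts(['html', 'json']) === 'json') {
    res.json(getUser(req.params.id)); // hypothetical data lookup
  } else {
    res.sendFile(path.join(__dirname, 'dist', 'index.html'));
  }
});

app.listen(3000);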
EDIT: I have eventually come across this SO question and the answer more or less addresses all of my questions: How to make a SPA SEO crawlable? There are obviously many ways to handle this on both client and server side and I wanted to see the different ways people addressed the issue. (I like the way that the aforementioned question/answer deals with the issue and will likely use a similar scheme.)
I don't know what your stack is, and I can't really speak to recommended practices, but I use webpack for this. In webpack, you accomplish this by defining multiple entry points. This splits your code into different self-contained packs, so you can have different .html files that produce different code bundles.
Adapting your situation to the appropriate webpack config:
const path = require("path");

module.exports = {
  entry: {
    main: "./index",
    user1: "./users/user1",
  },
  output: {
    path: path.join(__dirname, "dist"),
    filename: "[name].entry.js"
  }
};
Then, in the appropriate HTML, you'd load user1.entry.js.
I am not sure how extensible this is for a situation where each of your users has their own dedicated URL (obviously you can't name a unique entry point for hundreds of users), but at that point I'm not sure what you have is technically an SPA.
You can also consider using a routing solution similar to react-router, which allows you to grab data from the URL. E.g., with a webpack config like the above, navigating to example.com/users/user1:
// users.js (loaded by users.html): read the user id from the URL,
// then fetch that user's JSON (loadJSONFromServer is pseudocode)
loadJSONFromServer(window.location.pathname.split('/').pop());
I believe what you are asking is: how does a user visit a page that is not the home page (maybe through a link that was shared), and how does the app get the data it is supposed to display on that page?
The way I typically accomplish this is to initiate a data fetch in the lifecycle method componentDidMount that evaluates what data is already present and fills in the missing pieces. You could accomplish something similar through react-router's onEnter hook (which is how I handle user auth in an SPA).
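A minimal sketch of that pattern (React class component; UserPage, the userId prop, and the /api/users/:id endpoint are all hypothetical):
import React from 'react';

class UserPage extends React.Component {
  state = { user: null };

  componentDidMount() {
    // Evaluate what data is already present; fetch only the missing pieces
    if (!this.state.user) {
      fetch(`/api/users/${this.props.userId}`)
        .then((res) => res.json())
        .then((user) => this.setState({ user }));
    }
  }

  render() {
    return this.state.user
      ? <div>{this.state.user.name}</div>
      : <div>Loading...</div>;
  }
}

export default UserPage;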

Logging xhr requests while doing e2e-tests with protractor

I'm doing e2e-tests for an app whose frontend is written in AngularJS, and these tests would typically involve filling in forms, sending the data to the backend, then updating the page and making sure that the data persists. The tests are written in protractor.
One of these tests fails, inconsistently and for no apparent reason, so I would like to get as much information for debugging as possible. So I’ve been wondering whether it’s possible at all to log the xhr POST requests that my frontend is sending to the backend during the test in question, or better yet, whether the data that are being sent by the browser can be captured and examined from within protractor? Perhaps, using the browser object? I googled, and googled, but without success.
Yes, I realise that e2e-tests are intended only to interact with the interface and that ajax requests are too low-level for such kind of tests. Yes, perhaps stubbing the whole backend out and just testing the frontend would have been much better. But please humor me. Is it possible to get the information about what is being posted by the browser to the server during e2e-tests with protractor?
Protractor uses the webdriverjs API to "drive" the browser, so it won't have access to any more information than any other Selenium webdriver app would have. See the docs here: http://docs.seleniumhq.org/docs/03_webdriver.jsp#selenium-webdriver-api-commands-and-operations
Outside of some APIs for controlling the browser (adding cookies, opening new tabs), most of the functionality in Protractor and WebdriverJS comes from running snippets of JavaScript in the browser (e.g., to inspect the DOM). So, I don't think any of that qualifies for intercepting communications between the browser and the server.
I think you might have luck using the Protractor infrastructure for injecting code/modules into the app at startup (this is the best doc I can find for this feature). You should be able to inject a module that can interpose on the $http calls and log them as they go (or, of course, fully mock them out).
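A hedged sketch of that approach, using Protractor's browser.addMockModule to register an AngularJS module that logs every $http request (the module name and log format are my own invention):
browser.addMockModule('httpLogger', function () {
  angular.module('httpLogger', []).config(function ($httpProvider) {
    $httpProvider.interceptors.push(function () {
      return {
        request: function (config) {
          // This lands in the browser console; collect it from the test
          // with browser.manage().logs().get('browser')
          console.log('XHR:', config.method, config.url, JSON.stringify(config.data));
          return config;
        }
      };
    });
  });
});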
