I have a Node.js app that uploads a CSS file to S3 for use by my website. Everything seems to work fine, but when I access the file on my website, none of the CSS changes are applied. I can even see the CSS file under Dev Tools > Sources, but the CSS in that file is not taking effect. If I make any change to the file in dev tools, all the CSS immediately starts working. If I download the file from S3 and re-upload it manually without changing anything, that also works. So something about the formatting when I add the file with Node.js is throwing it off. Any help with this would be greatly appreciated.
let globalStyles = `.header-background{background-color:${themeStyles['headerBackground']};}
`;
// using this to remove backticks from globalStyles
globalStyles = globalStyles.replace(/^`|`$/g, '');
await addFileCssS3(globalStyles, `${newUrl}/global.css`, newUrl);
const addFileCssS3 = async (file, key, newUrl) => {
  await s3
    .putObject({
      Body: file,
      Bucket: BucketName,
      Key: key,
      ContentType: 'text/css',
    })
    .promise()
    .catch((error) => {
      console.error(error);
    });
};
To fix this, you can try removing the newline from the globalStyles string before passing it to the addFileCssS3 method. There are several ways to do this; one is simply to use the .trim() method, which removes any leading and trailing whitespace from a string.
For example:
globalStyles = globalStyles.trim();
Or use a regular expression to remove newline characters from the globalStyles string:
globalStyles = globalStyles.replace(/\r?\n|\r/g, '');
The second method, using a regular expression, removes newlines in both Windows (\r\n) and Linux (\n) style.
Another possible cause is a failed call to the function that re-inserts the CSS into the HTML page; if that function never runs, the downloaded CSS is never applied.
If this doesn't solve the problem, I recommend checking the Content-Type header of the CSS file when it is uploaded to S3 to make sure it's set correctly.
Also check whether the S3 bucket permissions or browser caching settings are preventing the updated CSS from being served.
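Putting the suggestions above together, a small cleanup helper could run before the upload. This is only a sketch; the function name is mine, and the byte-order-mark concern is an extra precaution I'm adding, not a confirmed cause:

```javascript
// Sketch: clean up a generated CSS string before uploading it to S3.
// Stray backticks, leading/trailing newlines, or a UTF-8 BOM at the start
// of the file can all make a browser silently ignore the stylesheet.
function sanitizeCss(css) {
  // .trim() removes leading/trailing whitespace, newlines, and also U+FEFF
  // (the BOM counts as whitespace in JavaScript).
  return css.trim().replace(/^`|`$/g, '').trim();
}
```

Usage would then be `await addFileCssS3(sanitizeCss(globalStyles), ...)` so the uploaded body is clean regardless of how the template literal was built.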
I'm setting up redirects for a client in Next.js. We have a configuration like so inside next.config.js:
{
  source: '/page.aspx:urlstr*',
  destination: 'https://someurl.com/page.aspx:urlstr*',
  permanent: true,
},
The actual URLs hitting this will have a long URL string like so:
page.aspx?woXxJNrRIMKVC109awwTopP+k2NmkvXf+MzijTEc3zIZ3pf4n+Yknq
This is being URL encoded to be:
https://someurl.com/page.aspx?woXxJNrRIMKVC109awwTopP%20k2NmkvXf%20MzijTEc3zIZ3pf4n%20Yknq
The old server hosting these pages at the destination can't handle the URL encoded query string. Is there a way to force Next.js to not parse it?
So the solution was in fact to use the Next.js middleware feature:
https://nextjs.org/docs/advanced-features/middleware
It's a little buggy though. The docs say to add middleware.js to the same level as the pages directory, but you actually need to add _middleware.js inside the pages directory.
Also, the matcher feature does not seem to work for me at all, so here's my solution:
import { NextResponse } from 'next/server'

export function middleware(request) {
  if (request.nextUrl.pathname.startsWith('/page.aspx')) {
    let url = new URL(request.url);
    return NextResponse.redirect(`https://someurl.com${url.pathname}${url.search}`)
  }
}
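The reason this works is that `url.search` on a parsed URL comes back essentially as it appears in the request rather than being decoded and re-encoded. A standalone sketch of that piece of the logic (the function name is mine, and someurl.com is the placeholder destination from the question):

```javascript
// Sketch: build the redirect target while preserving the raw query string.
// new URL() keeps characters like '+' in the query as-is instead of
// decoding them to spaces, which is what the legacy server needs.
function buildRedirect(requestUrl) {
  const url = new URL(requestUrl);
  return `https://someurl.com${url.pathname}${url.search}`;
}
```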
I need to find the sizes/metadata of externally hosted images in a document (e.g., Markdown documents with image tags in them), but I need to do it without actually downloading the images.
Is there an easy way to do this in Node.js/Express using JavaScript? Some of the solutions I've found are many years old, and I'm not sure whether there are better methods now.
You can do what was suggested in the comments by only grabbing the HEAD instead of using a GET when you request the image.
Using got (or whatever you like: http, axios, etc.), set the method to HEAD and look at content-length.
My example program that grabs a twitter favicon, headers only, looks like this:
const got = require('got');

(async () => {
  try {
    const response = await got('https://abs.twimg.com/favicons/twitter.ico', { method: 'HEAD' });
    console.log(response.headers);
  } catch (error) {
    console.log('something is broken. that would be a new and different question.');
  }
})();
and in the response I see the line I need:
'content-length': '912'
If the server doesn't respect HEAD or doesn't return a content-length header, you are probably out of luck.
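If you want more than the byte size, the same HEAD response usually carries the MIME type too. A small helper (the function name is hypothetical) that pulls both out of the headers object and handles the missing-header case:

```javascript
// Sketch: extract size and MIME type from the headers of a HEAD response.
// Returns null for either field when the server omits the header.
function imageMetaFromHeaders(headers) {
  const length = headers['content-length'];
  return {
    size: length !== undefined ? Number(length) : null,
    type: headers['content-type'] || null,
  };
}
```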
I'm using EPUB.js and Vue to render an Epub. I want to display the cover images of several epub books so users can click one to then see the whole book.
There's no documentation on how to do this, but there are several methods that indicate that this should be possible.
First off, there's Book.coverUrl() method.
Note that I'm setting an img src property equal to bookCoverSrc in the Vue template. Setting this.bookCoverSrc will automatically update the src of the img tag and cause an image to display (if the src is valid / resolves).
this.book = new Epub(this.epubUrl, {});
this.book.ready.then(() => {
  this.book.coverUrl().then((url) => {
    this.bookCoverSrc = url;
  });
});
The above doesn't work. url is undefined.
Weirdly, there appears to be a cover property directly on book. So, I try:
this.book = new Epub(this.epubUrl, {});
this.book.ready.then(() => {
  this.coverSrc = this.book.cover;
});
this.book.cover resolves to OEBPS/#public#vhost#g#gutenberg#html#files#49010#49010-h#images#cover.jpg, so at least locally, setting it as a src results in a request to http://localhost:8080/OEBPS/#public#vhost#g#gutenberg#html#files#49010#49010-h#images#cover.jpg, which 200s but returns no content. It's probably a quirk of webpack-dev-server to 200 on that; if I page through Sources in Chrome dev tools, I also don't see any indication that such a URL should resolve.
So the docs aren't helping. I googled and found this GitHub question from 2015. Their code looks like:
$("#cover").attr("src", Book.store.urlCache[Book.cover]);
Interesting: there's nothing in the docs about Book.store.urlCache. As expected, urlCache is undefined, though book.store exists. I don't see anything on there that can help me display a cover image, though.
Using epub.js, how can I display a cover image of an Epub file? Note that simply rendering the first "page" of the Epub file (which is usually the cover image) doesn't solve my problem, as I'd like to list a couple epub files' cover images.
Note also that I believe the epub files I'm using do have cover images. The files are Aesop's Fables and Irish Wonders.
EDIT: It's possible I need to use Book.load on the url provided by book.cover first. I did so and tried to console.log it, but it's a massive blob of weirdly encoded text that looks something like:
����
So I think it's an image straight up, and I need to find a way to get that onto the Document somehow?
EDIT2: that big blobby blob is type: string, and I can't atob() or btoa() it.
EDIT3: Just fetching the url provided by this.book.cover returns my index.html, default behavior for webpack-dev-server when it doesn't know what else to do.
EDIT4: Below is the code for book.coverUrl from epub.js
key: "coverUrl",
value: function coverUrl() {
  var _this9 = this;
  var retrieved = this.loaded.cover.then(function (url) {
    if (_this9.archived) {
      // return this.archive.createUrl(this.cover);
      return _this9.resources.get(_this9.cover);
    } else {
      return _this9.cover;
    }
  });
  return retrieved;
}
If I use this.archive.createUrl(this.cover) instead of this.resources.get, I actually get a functional URL that looks like blob:http://localhost:8080/9a3447b7-5cc8-4cfd-8608-d963910cb5f5. I'll try setting that as the src and see what happens.
The reason this was happening to me was because the functioning line of code in the coverUrl function was commented out in the source library epub.js, and a non-functioning line of code was written instead.
So, I had to copy down the entire library, uncomment the good code and delete the bad. Now the function works as it should.
To do so, clone down the entire epub.js project. Copy the dependencies from that project's package.json into your own. Then take the src, lib, and libs folders and copy them somewhere into your project. Find a way to disable ESLint for the location you put these folders into, because the project uses tab characters for indentation, which caused my terminal to hang when ESLint exploded.
Run npm install so that both your own dependencies and the epub.js dependencies are in node_modules.
Open book.js. Uncomment line 661, which looks like:
// return this.archive.createUrl(this.cover);
and comment out line 662, which looks like:
return this.resources.get(this.cover);
Now you can display an image by setting an img tag's src attribute to the URL returned by book.coverUrl().
this.book = new Epub(this.epubUrl, {});
this.book.ready.then(() => {
  this.book.coverUrl().then((url) => {
    this.bookCoverSrc = url;
  });
});
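As a lighter-weight alternative to vendoring the whole library, overriding coverUrl on the book instance at runtime should achieve the same thing. This is only a sketch based on the library source shown above, and I haven't verified it against every epub.js version:

```javascript
// Sketch: monkey-patch coverUrl on a single book instance instead of
// forking epub.js, using archive.createUrl (the line that is commented
// out in the library source).
function patchCoverUrl(book) {
  book.coverUrl = function () {
    return this.loaded.cover.then(() =>
      this.archived ? this.archive.createUrl(this.cover) : this.cover
    );
  };
}
```

Call `patchCoverUrl(this.book)` right after constructing the book, before calling `this.book.coverUrl()`.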
What I want to build is this: by clicking a button, I want to trigger direct printing of a PDF file, without opening or viewing it.
I have the PDF as a blob returned from the fetch API.
I've tried a lot of examples but don't know exactly how to do it.
Some examples tried:
// In my case, I had only blobData from PDF, but you can ignore this and set fileURL directly if it is not yours.
const file = new Blob([blobData], { type: 'application/pdf' });
const fileURL = window.URL.createObjectURL(file);
// As the URL is dynamic, we must set src here, but if it's not, just leave it direct on the HTML.
// Also, be careful with React literals. If you do this in a <iframe> defined in JSX, it won't work
// as React will keep src property undefined.
window.frames["my-frame"].src = fileURL;
// Then print:
window.frames["my-frame"].print();
<iframe name="my-frame" id="my-frame" title="my-frame" style="display: none"></iframe>
I also tried the Print.js library: http://printjs.crabbly.com/.
Is there a way to print the PDF without visibly opening it to the user?
We should support only Chrome browser.
Can someone provide example how to do it in React, Redux application?
Try print-js, which is an npm package.
Install the npm package:
npm install print-js --save
Then add the following code:
import print from "print-js";

const fileURL = "someurl.com/document.pdf";

const handlePrint = (e) => {
  e.preventDefault();
  print(fileURL);
};
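Since the question starts from a blob returned by fetch rather than a URL, one way to bridge the two is to wrap the blob in an object URL and hand that to print-js. A sketch (the helper name is mine):

```javascript
// Sketch: turn fetched PDF bytes into an object URL that print-js can print.
function createPdfObjectUrl(blobData) {
  const file = new Blob([blobData], { type: 'application/pdf' });
  return URL.createObjectURL(file);
}

// Usage in the browser, with print-js imported as `print`:
//   print(createPdfObjectUrl(blobData));
```

Remember to call URL.revokeObjectURL on the returned URL once printing is done, so the blob can be garbage-collected.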
I am looking for a way to create an export file for an external application that accepts diacritics only in the Windows-1252 encoding. The user should be able to download the file through the web application and import it into the external application. I am using Node.js on the backend.
The problem is that Node.js does not support encodings even similar to 1252, so special characters like ľščťžýáíé are a problem. Is there a workaround, or a way to create the file on the frontend (after an AJAX request) in the required encoding?
EDIT:
The encoding was poorly specified by the application owner. As @robertklep said, windows-1252 was the wrong encoding. I tried a lot of different encodings, and the proper one turned out to be CP-1250. Using iconv-lite, I created this example solution.
total.js
const iconv = require('iconv-lite');

exports.install = function() {
  F.route('/route/to/export', download, ['get']);
};

function download() {
  var self = this;
  var text = 'some important content on multiple lines';
  var content = iconv.encode(text, 'CP1250');
  return self.res.content(200, content, 'text/plain', true, {
    'Content-Disposition': 'attachment; filename="export.txt"'
  });
}
frontend
$('#btn-export').on('click', function(){
  location.href = '/route/to/export';
});
This works well for me, but can someone suggest a better solution for transferring the file from the backend?