I am creating a website in which each user has an "avatar". An avatar has different accessories like hats, facial expressions, etc. I made this previously on a PHP website, but I am using React to create this new website. I am loading each user's avatar and its item links from Firestore. I do not want to use absolute positioning or CSS; I want the avatar to be one image.
Example of what I am trying to achieve:
I found this library: https://github.com/lukechilds/merge-images which seems to be exactly what I need, but I cannot load external images without getting this error:
Any solutions to this error or suggestions to an alternative would be greatly appreciated.
My code:
render() {
  mergeImages([
    'http://example.com/images/Avatar.png',
    'http://example.com/images/Hat.png',
  ])
    .then((b64) => {
      document.querySelector('img.abc').src = b64;
    })
    .catch((error) => console.log(error));

  return (
    ...
    <img className="abc" src="" width={100} height={200} alt="avatar" />
    ...
  );
}
The merge-images package has some quirks. One of those quirks is that it expects individual images either to be served from your local server (for example: http://localhost:3000/images/head.png, http://localhost:3000/images/eyes.png, and http://localhost:3000/images/mouth.png) or to be imported into a single file.
Working example: https://github.com/mattcarlotta/merge-images-example (this example includes the first three options explained below, with the fourth option utilizing the end result of using a third-party CDN)
To run the example, clone the repo:
git clone https://github.com/mattcarlotta/merge-images-example
Change directory:
cd merge-images-example
Then install dependencies:
yarn install
Then run the development server:
yarn dev
Option 1:
The simplest implementation would be to import them into an AvatarFromFiles component. However, as written, it isn't reusable and isn't suitable for dynamically selected avatars.
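A minimal sketch of that approach (the filenames and component shape are assumptions; because imported files are bundled and served from your own origin, the canvas isn't tainted):

import React, { Component } from 'react';
import mergeImages from 'merge-images';
import Avatar from './images/Avatar.png';
import Hat from './images/Hat.png';

export default class AvatarFromFiles extends Component {
  state = { src: '' };

  componentDidMount() {
    // Merge after mount so render() stays free of side effects.
    mergeImages([Avatar, Hat])
      .then((src) => this.setState({ src }))
      .catch((error) => console.error(error));
  }

  render() {
    return <img src={this.state.src} width={100} height={200} alt="avatar" />;
  }
}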
Option 2:
You may want to serve them from the local server, like the AvatarFromLocalServer component with a webpack dev config. Then you would retrieve stored strings from an API and pass them from state down into the component. Once again, this still requires the images to be present in the images folder, but more importantly, it isn't ideal for a production environment because the images folder must be placed outside of the src folder to be served. This could also lead to security issues. Therefore, I don't recommend this option at all.
Option 3:
Same as Option 1, but lazy loaded like the AvatarFromLazyFiles component, and therefore flexible. You can load images by their name; however, it still requires that all of the images be present at runtime and during production compilation. In other words, what you have on hand is what you get.
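A rough sketch of the lazy-loading idea (webpack's dynamic import is assumed; the folder and names are placeholders):

// Resolve an image module by name at runtime; webpack bundles every
// candidate under ./images, so the files must exist at build time.
const loadImage = (name) => import(`./images/${name}.png`).then((mod) => mod.default);

Promise.all(['Avatar', 'Hat'].map(loadImage))
  .then((sources) => mergeImages(sources))
  .then((b64) => { /* use the merged data URI */ });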
Option 4:
So... the ideal option would be to build an image microservice, or use a CDN, that handles all things images (uploading, manipulating/merging, and serving them). The client would only select/upload new images to this microservice/CDN, which handles everything else. This may require a bit more work, but it offers the most flexibility and the best performance, and it is very easy to implement on the client, as it offloads all the work from the client to the dedicated service.
In conclusion: if you plan on having a fixed set of images, use Option 3; otherwise, use Option 4.
Problem
This is a CORS issue. The images are coming from a different origin that's not your server.
If you look at the source of the library, you'll notice it's using a <canvas> under the hood to merge the images and then reading the resulting data back out. A canvas becomes "tainted" once an image from another origin is drawn into it, and its data can no longer be read. There's good reasoning behind this: loading an image into a canvas is a way to fetch data, and since you can retrieve the data from the canvas as base64, a malicious page could steal information by first loading it into a <canvas> and then pulling it out.
You can read about it directly from the spec for the <canvas> element.
Solution
You need to serve the images either from the same origin (essentially, the same domain) or with an Access-Control-Allow-Origin: ... header on the HTTP responses that serve the images. There are ways to do this in Firebase Storage, or in other server solutions you might use.
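For example (a sketch; the bucket paths and origin are placeholders): Firebase Storage buckets accept a Google Cloud Storage CORS policy, applied with something like gsutil cors set cors.json gs://your-bucket, where cors.json allows GET from your site's origin. Then tell merge-images to request the images with CORS enabled via its crossOrigin option:

mergeImages(
  [
    'https://firebasestorage.googleapis.com/v0/b/your-bucket/o/Avatar.png?alt=media',
    'https://firebasestorage.googleapis.com/v0/b/your-bucket/o/Hat.png?alt=media',
  ],
  { crossOrigin: 'anonymous' } // sets crossOrigin on the underlying Image instances
).then((b64) => { /* the canvas is no longer tainted, so this succeeds */ });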
Related
How do I set preload headers in Next.js when the files have random build names?
I've seen from the page source that the static files are preloaded using link tags in the HTML. I'd much rather have these preloaded via the headers, as that also enables HTTP/2 Server Push.
According to the documentation, you can set custom headers in next.config.js. This is fine, but the main problem is that the file names get random strings every build.
For example, I've also got some local font files that I'd like to push, as well as the CSS file generated by Tailwind.
How would you set preload headers for the resources in Next.js?
EDIT:
I have managed to hardcode the font files in the headers, as these keep their hashed names across rebuilds. The Tailwind CSS file seems impossible to hardcode this way, as it gets a new name every time I rebuild. I guess I could modify the build folder in that case, but both of these methods are less than ideal.
Why isn't this a more common issue for people using React/Next.js? As far as I know, HTTP/2 Server Push makes everything much faster, as long as the server supports it.
Here is a working (but is it efficient?) solution, which requires setting up an Apache2 or NGINX reverse proxy:
Use a custom server.
Intercept the response body and search for <link rel=preload> HTML tags,
then set a Link HTTP header for each link.
You could use this library.
Configure the reverse proxy (NGINX or Apache2) to automatically push resources by intercepting Link HTTP headers.
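A rough sketch of the custom-server part (Express is assumed; the regex and the fixed as=script type are simplifications, so a real implementation should match as to each resource type):

const express = require('express');
const next = require('next');

const app = next({ dev: process.env.NODE_ENV !== 'production' });

app.prepare().then(() => {
  const server = express();

  server.get('*', async (req, res) => {
    // Render to a string instead of streaming, so the body can be inspected.
    const html = await app.renderToHTML(req, res, req.path, req.query);

    // Collect the <link rel="preload"> tags Next.js emitted into the HTML.
    const preloads = [...html.matchAll(/<link[^>]*rel="preload"[^>]*href="([^"]+)"[^>]*>/g)];

    if (preloads.length) {
      // The reverse proxy can turn this Link header into HTTP/2 pushes.
      res.setHeader(
        'Link',
        preloads.map(([, href]) => `<${href}>; rel=preload; as=script`).join(', ')
      );
    }

    res.send(html);
  });

  server.listen(3000);
});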
See also : https://github.com/vercel/next.js/issues/2961
Question
Is it possible to precache a file using a different strategy, e.g. Stale-While-Revalidate?
Or, should I just load the script in the DOM and then add a route for it in the worker with the correct strategy?
Background
This is quite a weird case so I will try to explain it as best I can...
We have two repos; The PWA and The Games
Both are statically hosted on the same CDN
Due to the Games repo being separate, the PWA has no access to the versioning of the game js bundles
Therefore, the solution I have come up with is to generate an unversioned manifest (game-manifest.js) in the Games build
The PWA will then precache this file, loop through its contents, and append each entry to the existing precache manifest
However, given that game-manifest.js has no revision and is not hashed, we need to apply either a Network-First or Stale-While-Revalidate strategy in order for the file to be updated when new versions become available
See the following code as a clearer example of what I am trying to do:
import { precacheAndRoute } from 'workbox-precaching';
// Load the game manifest
// THIS FILE NEEDS TO BE PRECACHED, but under the strategy
// of stale while revalidate, or network first.
importScripts('example.cdn.com/games/js/game-manifest.js');
// Something like...
self.__gameManifest.forEach(entry => {
  self.__precacheManifest.push({
    url: entry
  });
});
// Load the assets to be precached
precacheAndRoute(self.__precacheManifest);
Generally speaking, it's not possible to swap in an alternative strategy when using workbox-precaching. It's always going to be cache-first, with the versioning info in the precache manifest controlling how updates take place.
There's a larger discussion of the issue at https://github.com/GoogleChrome/workbox/issues/1767
The recommended course of action is to explicitly set up runtime caching routes using the strategy that you'd prefer, and potentially "prime" the cache by adding entries to it in advance during the install step.
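A minimal sketch of that approach (Workbox v5 module imports assumed; the cache name and URL pattern are placeholders):

import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';

const GAME_CACHE = 'game-assets'; // hypothetical cache name

// Serve the unversioned manifest (and the game bundles) stale-while-revalidate
// instead of precaching them.
registerRoute(
  ({ url }) => url.pathname.startsWith('/games/js/'),
  new StaleWhileRevalidate({ cacheName: GAME_CACHE })
);

// "Prime" the cache during install so the entries are available right away.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(GAME_CACHE).then((cache) =>
      cache.addAll(['/games/js/game-manifest.js'])
    )
  );
});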
So, I have an interesting situation. I've been working on reorganizing a directory on a website. I updated the old files (there are about 100 of them), and they are now in a new location. The old files have been taken down.
The problem I have is that there are probably hundreds of people with bookmarks pointing directly to the URLs of the old files (e.g. "wahwah.com/subSite/pdfs/something.pdf"). These files are 5 years old, so those people need to find the new ones anyway.
So instead of having a page for each individual file, can I have something in the directory that used to house the files watch for those URLs and redirect to the new page?
It would watch for "wahwah.com/subSite/pdfs.." and redirect. Or maybe something in the main directory of this subSite could watch for URLs with the /pdfs path in them.
I know I can grab URLs in JavaScript, but that doesn't help me unless I can do what I stated above. I'm not sure how, if at all, I could do it in .NET. Our servers support .NET because most of our site apps were made with it, but I don't deal with those. I cannot use PHP; the servers don't run it.
I'm hoping JavaScript will be able to do it somehow, but it's something I've never tried before, so just thinking about it I'm not sure it can. I'm not much for using JS libraries, so I'm not sure what is out there; I've been searching a bit though.
I found Grunt, but I'm not entirely sure how it works just yet. Just looking around, maybe the file filter or matchBase, or some of the globbing patterns.
If you have access to the server, your best option is to set up a redirect there on the wahwah.com/subSite/pdfs/ directory.
How to do this depends on whether you're on IIS or Unix.
In ASP.NET, a 301 redirect is fairly efficient.
if (HttpContext.Current.Request.Url.ToString().Contains("old.aspx"))
{
    HttpContext.Current.Response.Status = "301 Moved Permanently";
    HttpContext.Current.Response.AddHeader("Location", "http://new.aspx");
}
Or in Page_Load you can write:
Response.Status = "301 Moved Permanently";
Response.AddHeader("Location","http://new.aspx");
Is there a way to force the clients of a webpage to reload the cache (i.e. images, JavaScript, etc.) after the server has been pushed an update to the code base? We get a lot of help desk calls asking why certain functionality no longer works. A simple hard refresh fixes the problem, as it downloads the newly updated JavaScript file.
For specifics, we are using Glassfish 3.x and JSF 2.1.x. This applies to more than just JSF, of course.
To describe the behavior I hope is possible:
Website A has two images and two JavaScript files. A user visits the site and the four files get cached. As far as I'm concerned, there's no need to re-download those files unless the user specifically forces a hard refresh or clears their cache. Once an update to one of the files is pushed to the site, the server could have some sort of metadata in the header informing the client of the update. If the client chooses, the new files would be downloaded.
What I don't want to do is put a meta tag in the header of the page that prevents anything from ever being cached; I just want something that tells the client an update has occurred and that it should fetch the latest version. I suppose this would just be some sort of versioning on the client side.
Thanks for your time!
The correct way to handle this is by changing the URL convention for your resources. For example, ours is:
/resources/js/fileName.js
To get the browser to still cache the file, but to do it properly with versioning, add something to the URL. Adding a value to the querystring doesn't reliably allow caching, so the place to put it is after /resources/.
A reference for querystring caching: http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.9
So for example, your URLs would look like:
/resources/1234/js/fileName.js
So what you could do is use the project's version number (or some value in a properties/config file that you manually change when you want cached files to be reloaded), since this number should change only when the project is modified. Your URL could then look like:
/resources/cacheholder${project.version}/js/fileName.js
That should be easy enough.
The problem now is with mapping the URL, since the value in the middle is dynamic. The way we overcame that is with a URL-rewriting module that filters URLs before they reach the application. The rewrite watched for URLs that looked like:
/resources/cacheholder______/whatever
and removed the cacheholder_______/ part. After the rewrite, it looked like a normal request, and the server responded with the correct file without any other specific mapping/logic. The point is that the browser thought it was a new file (even though it really wasn't), so it requested it, and the server figured it out and served the correct file (even though it's a "weird" URL).
Of course, another option is to add this dynamic string to the filename itself and then use the rewrite tool to remove it. Either way, the same thing is done: targeting a string of text during the rewrite and removing it. This lets you fool the browser, but not the server :)
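As a sketch of the idea in JavaScript (an Express middleware; in a Java stack a servlet filter or rewrite module plays the same role, and the pattern is an assumption):

// Strip the dynamic cacheholder segment so the request maps to the real file.
app.use((req, res, next) => {
  req.url = req.url.replace(/^\/resources\/cacheholder[^/]+\//, '/resources/');
  next();
});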
UPDATE:
An alternative that I really like is to set the filename based on the contents, and cache that. For example, that could be done with a hash. Of course, this type of thing isn't something you'd manually do and save to your project (hopefully); it's something your application/framework should handle. For example, in Grails, there's a plugin that "hashes and caches" resources, so that the following occurs:
Every resource is checked
A new file (or mapping to this file) is created, with a name that is the hash of its contents
When adding <script>/<link> tags to your page, the hashed name is used
When the hash-named file is requested, it serves the original resource
The hash-named file is cached "forever"
What's cool about this setup is that you don't have to worry about caching correctly - just set the files to cache forever, and the hashing should take care of files/mappings being available based on content. It also provides the ability for rollbacks/undos to already be cached and loaded quickly.
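A tiny sketch of the content-hashing step (Node is assumed; in practice your framework or build tool does this for you):

const crypto = require('crypto');
const fs = require('fs');

// app.js -> app.3f8de2a1.js : the name changes only when the contents change,
// so the hashed file can safely be cached "forever".
function hashedName(file) {
  const digest = crypto.createHash('md5').update(fs.readFileSync(file)).digest('hex');
  return file.replace(/(\.\w+)$/, `.${digest.slice(0, 8)}$1`);
}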
I use a no-cache parameter for these situations.
I have a string constant (from a config file) like:
$no_cache = "v11";
and in pages, I use assets like:
<img src="a.jpg?nc=$no_cache">
When I update my code, I just change the $no_cache value, and it works like a charm.
So, I'm trying to think of the best way to solve a problem I have.
The problem is that I produce many websites for my job, and as CSS3 and HTML5 prove themselves powerful, I want to eliminate almost all images from my websites. For button icons and various other things I have a sprite image with all the icons on it, which I shift around depending on what icon I need. What I need to be able to do is recolour this image dynamically on the web server, so that I don't have to open up Photoshop and recolour the icons manually.
I have done some research, and the only thing I've come across that has a chance of working the way I want is a Photoshop JavaScript. My question is: once I've written my script and it recolours my icon image, can it be run on a server so that, when a user clicks a button for example, the image is recoloured and saved to the server?
Would this require Photoshop being installed on the server? Is this even possible?
Photoshop is only available for Mac or Windows, as you will know.
As far as I know, you can't install Photoshop on Windows Server (I tried it with CS4 myself; maybe it works with CS6 nowadays). But you could install PS on a Windows 7 machine behind a firewall.
If you use a Windows machine, you can use COM for automation. I tried it and it worked well.
I have done something similar to what you are thinking of, with two Macs and PS JavaScript (ImageMagick, PIL, etc. weren't working for me because the job was too complicated) on a medium-traffic webpage. So I don't agree with Michael's answer.
First thing: think about caching the images, and use the low-traffic time to compute images that could be needed in the future. This really made things easier for me.
Second thing: experiment with image size, DPI, etc. The smaller the images, the faster the process.
My workflow was:
The web server writes to a database ("Hey, I need a new image with name path/bla.jpg").
An Ajax call checks if the image is present. If not, it shows a "processing your request" placeholder.
A script running in an infinite loop on the Mac behind the firewall constantly checks whether a new image is needed.
If it finds one, it updates the database ("Mac One will compute this job"). This prevents every Mac from going for the same image.
The script calls Photoshop. Photoshop computes the image.
The script uploads the image to the web server (I used rsync).
The Ajax call sees the new image and presents it to the user.
The script on the Mac updates the database: "image successfully created".
You will need some error-handling logic, etc.
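A rough sketch of the Ajax polling step (the endpoint, response shape, and interval are assumptions):

function pollForImage(name, onReady) {
  const timer = setInterval(() => {
    fetch('/image-status?name=' + encodeURIComponent(name))
      .then((res) => res.json())
      .then(({ ready, url }) => {
        if (ready) {
          clearInterval(timer); // stop polling once the Mac has uploaded the image
          onReady(url);
        }
      });
  }, 2000);
}

// Usage: swap the placeholder out when the image arrives.
pollForImage('path/bla.jpg', (url) => {
  document.querySelector('img.placeholder').src = url;
});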
This problem has been bothering me for years too. All I ever wished for was a Photoshop server which I could talk to through an API and get things done. Well, I have built something that is close: using the Generator plugin, I can connect through a web socket and inject JavaScript into Photoshop. Technically you are able to do anything that can be done using the Photoshop scripting guide (including manipulating existing PSDs).
The library https://github.com/Milewski/generator-exporter exports all the layers marked with a special syntax in the desired format.
This code could run on the server, using Node.js:
import { Generator } from 'generator-exporter'
import * as glob from 'glob'
import * as path from 'path'

const files = glob.sync(
  path.resolve(__dirname, '**/*.psd')
);

const generator = new Generator(files, {
  password: '123456',
  generatorOptions: {
    'base-directory': path.resolve(__dirname, 'output')
  }
})

generator.start()
  .then(() => console.log('Here you could grab all the generated images and send them back to the client...'));
However, I wouldn't recommend this for heavy usage with too many concurrent tasks, because it needs Photoshop installed locally: the Photoshop GUI will be initialized, and this process is quite slow, so it doesn't really fit a busy workflow.