So, I'm trying to think of the best way to solve a problem I have.
The problem is that I produce many websites for my job, and as CSS3 and HTML5 prove themselves powerful, I want to eliminate almost all images from my websites. For button icons and various other things I have a sprite image with all the icons on it, which I just shift around depending on which icon I need. What I need to be able to do is recolour this image dynamically on the web server, so that I don't have to open up Photoshop and recolour the icons manually.
I have done some research, and the only thing I've come across that has a chance of working the way I want is a Photoshop JavaScript. My question is: once I've written my script and it recolours my icon image, can it be run on a server, so that when a user clicks a button, for example, the image is recoloured and saved to the server?
Would this require Photoshop being installed on the server? Is this even possible?
As you will know, Photoshop is only available for Mac and Windows.
As far as I know you can't install Photoshop on Windows Server (I tried it myself with CS4 - maybe it works with CS6 nowadays). But you could install PS on a Win 7 machine behind a firewall.
If you use a Windows machine you can use COM for automation. I tried it and it worked well.
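As a rough sketch of that COM route (assuming Photoshop's documented COM automation object "Photoshop.Application" and its DoJavaScript method; the path and the recolour logic are illustrative placeholders), a Windows Script Host JScript file could look like this:
// recolor.js - run with: cscript //nologo recolor.js
// Sketch: drive Photoshop through COM from Windows Script Host (JScript).
var psApp = new ActiveXObject("Photoshop.Application");

// Open the sprite sheet (path is illustrative).
var doc = psApp.Open("C:\\sprites\\icons.psd");

// Hand the actual recolouring to a Photoshop JavaScript (ExtendScript) string;
// the invert() call is just a stand-in for real recolour logic.
psApp.DoJavaScript("app.activeDocument.activeLayer.invert();");

// Save and close; exact save options vary by Photoshop version.
doc.Save();
doc.Close();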
I have done a similar thing to what you are thinking of, with two Macs and PS JavaScript (ImageMagick, PIL etc. weren't working for me, because the job was too complicated), on a medium-traffic web page. So I don't agree with Michael's answer.
First thing: think about caching the images, and use the low-traffic time to compute images which could be needed in the future. This really made things easier for me.
Second thing: experiment with image size, dpi, etc. The smaller the images, the faster the process.
My workflow was:
The web server writes to a database ("Hey, I need a new image named 'path/bla.jpg'").
An Ajax call checks whether the image is present. If not, it shows a "processing your request" placeholder.
A script running in an infinite loop on the Mac behind a firewall constantly checks whether a new image is needed.
If it finds one, it updates the database ("Mac One will compute this job"). This prevents every Mac from going for the same image.
The script calls Photoshop. Photoshop computes the image.
The script uploads the image to the web server (I used rsync).
The Ajax call sees the new image and presents it to the user.
The script on the Mac updates the database ("image successfully created").
You will need some error handling logic etc.
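A minimal sketch of that polling-and-claiming loop in Node.js (the jobs table layout, the query() helper, and the two worker functions are all illustrative placeholders, not a real API):
// Sketch: worker loop that claims one pending job at a time,
// so that no two Macs ever compute the same image.
async function workerLoop(query, workerId) {
  while (true) {
    // Atomically claim one pending job (MySQL-style SQL, illustrative).
    const res = await query(
      "UPDATE jobs SET status = 'claimed', worker = ? WHERE status = 'pending' LIMIT 1",
      [workerId]
    );
    if (res.affectedRows === 1) {
      const [job] = await query(
        "SELECT name FROM jobs WHERE worker = ? AND status = 'claimed'",
        [workerId]
      );
      await renderWithPhotoshop(job.name);  // call Photoshop to compute the image
      await uploadToWebserver(job.name);    // e.g. rsync to the web server
      await query("UPDATE jobs SET status = 'done' WHERE name = ?", [job.name]);
    } else {
      await new Promise(r => setTimeout(r, 2000)); // nothing to do; wait, then re-poll
    }
  }
}

// Hypothetical helpers - stand-ins for the real Photoshop call and the rsync upload.
async function renderWithPhotoshop(name) { /* run the Photoshop JavaScript here */ }
async function uploadToWebserver(name) { /* e.g. child_process.spawn('rsync', ...) */ }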
Uhm, this problem has been bothering me for years too. All I ever wished for was a Photoshop server I could talk to through an API and get things done. Well, I have built something that is close: using the Generator plugin I can connect through a web socket and inject JavaScript into Photoshop. Technically you are able to do anything that can be done using the Photoshop scripting guide (including manipulating existing PSDs).
This library, https://github.com/Milewski/generator-exporter, exports all the marked layers (using a special syntax) in their desired formats.
This code could run on the server using Node.js:
import { Generator } from 'generator-exporter'
import * as glob from 'glob'
import * as path from 'path'

const files = glob.sync(
  path.resolve(__dirname, '**/*.psd')
);

const generator = new Generator(files, {
  password: '123456',
  generatorOptions: {
    'base-directory': path.resolve(__dirname, 'output')
  }
})

generator.start()
  .then(() => console.log('Here you could grab all the generated images and send them back to the client...'));
However, I wouldn't recommend this for heavy usage with many concurrent tasks, because it needs Photoshop installed locally and the Photoshop GUI will be initialized; this process is quite slow, so it doesn't really fit a busy workflow.
Related
I developed a kind of job application website and I only now realized that by allowing the upload of PDF files I'm at risk of receiving PDF documents containing encrypted data, active content (e.g. JavaScript, PostScript), and external references.
What could I use to sanitize or re-build the content of every PDF file uploaded by users?
I want the companies that will later review the uploaded resumes to be able to open them from their browsers without putting themselves at risk.
The simplest method to flatten or sanitise a PDF using Ghostscript in SAFER mode requires just one pass.
For a Windows user it will be as "simple" as using the new 9.55 command:
"c:\path to gs9.55\bin\GSwin64c.exe" -sDEVICE=pdfwrite -dNEWPDF -o "Output.pdf" "Input.pdf"
For other platforms, replace gs9.55\bin\GSwin64c with your version 9.55 gs command.
It is not a fast method: around 40 ppm is not uncommon, so 4 pages take about 6 seconds to be reprinted, while a 400-page document could take 10 minutes.
An advantage is that the file size is often smaller once any redundant content is removed; image and font reconstruction may save storage (e.g. a 100 MB file may be reduced to 30 MB), but that is a general bonus, not an aim.
JavaScript actions are usually discarded; however, links such as bookmarks are usually retained, so be cautious, as the result can still have rogue hyperlinks.
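Since the question is about sanitising files as users upload them, you would likely wrap that one-pass command server-side. A minimal Node.js sketch (assuming the gs binary, or gswin64c.exe on Windows, is on the PATH):
// Sketch: run the one-pass Ghostscript sanitise from Node.js.
const { execFile } = require('child_process');

function sanitizePdf(inputPath, outputPath, callback) {
  execFile(
    'gs', // use the full path to gswin64c.exe on Windows
    ['-sDEVICE=pdfwrite', '-dNEWPDF', '-o', outputPath, inputPath],
    (err, stdout, stderr) => callback(err)
  );
}

// Usage:
// sanitizePdf('Input.pdf', 'Output.pdf', err => { if (err) throw err; });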
The next best suggestion is two passes via PostScript as discussed here https://security.stackexchange.com/questions/103323/effectiveness-of-flattening-a-pdf-to-remove-malware
GS[win64c] -sDEVICE=ps2write -o "%temp%\temp.ps" "Input.pdf"
GS[win64c] -sDEVICE=pdfwrite -o "Output.pdf" "%temp%\temp.ps"
But there is no proof that it's any different or more effective than the one-line approach.
Finally, the strictest method of all is to burst the PDF into image-only pages, then stitch the images back into a single PDF, concurrently running OCR to reconstruct a searchable PDF (this drops bookmarks). That can also be done using Ghostscript built with Tesseract enabled.
Note: visible external hyperlinks may then still be reactivated by the PDF reader's native ability to detect them.
Currently my group runs a Meteor app with hundreds of thousands of images, all very large and full-size. We should have done this a long time ago, but we need a way to optimize them to help loading times. I'm looking for a solution that saves the image in several sizes when the user uploads it from our app (e.g. full-size, medium, thumbnail), auto-rotates it, and allows the user to rotate it. We use Amazon S3 to host all of our images. We also need a way to convert all of the existing images into these size formats server-side.
I tried implementing something a while back and it was unsuccessful. I set up ImageMagick on our server but had trouble getting it to work in production, because the image was being saved temporarily in server memory for processing, and this was causing crashes due to the limited amount of memory. I have little experience with this kind of thing.
My second thought was to use HTML canvas to resize the images. This would work, I think, for newly uploaded images. But I am still searching for a way to process the existing images as well.
I have considered:
Maybe AWS has a built in way to process them. I wouldn’t mind doing it that way.
Some sort of Meteor/node package that can help with this.
Setting up another server just to process images.
Using some third party image processing.
If someone can give me some advice so I can get the ball rolling that would be so helpful!
libvips can resize images without holding the whole image in memory or on disk: pixels are streamed through the system in small chunks, with decode and recode happening at the same time.
For example, with a 10k x 10k pixel JPG image, I see:
$ vipsheader wtc.jpg
wtc.jpg: 9372x9372 uchar, 3 bands, srgb, jpegload
$ /usr/bin/time -f %M:%e vipsthumbnail wtc.jpg -s 5000x5000 -o x.jpg
98720:0.65
This is a 4-core, 8-thread i7. It's using 98MB of memory and takes 0.65s of real time. There's a chapter in the docs introducing vipsthumbnail.
For comparison, with ImageMagick 6 I see:
$ /usr/bin/time -f %M:%e convert wtc.jpg -resize 5000x5000 x.jpg
1263232:2.02
1.3GB of memory and it takes 2s of real time -- about 13x more memory and 3x slower.
Because vipsthumbnail uses so little memory, you can combine it with GNU parallel without needing a server with many, many GB of memory. On this i7, I can usefully run four at once and get a roughly 4x speedup, so perhaps 12x faster than ImageMagick overall.
sharp is a popular Node.js binding for libvips, which might be more convenient. There are bindings for Python, Ruby, PHP, Go, Lua, etc. as well.
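Since the question asks for full, medium, and thumbnail renditions plus auto-rotation, a minimal sharp sketch might look like this (file names and widths are illustrative):
// Sketch: generate three renditions with sharp; rotate() with no arguments
// auto-rotates the image based on its EXIF orientation tag.
const sharp = require('sharp');

async function makeRenditions(inputPath) {
  await sharp(inputPath).rotate().resize(1600).toFile('full.jpg');
  await sharp(inputPath).rotate().resize(800).toFile('medium.jpg');
  await sharp(inputPath).rotate().resize(200).toFile('thumb.jpg');
}

makeRenditions('input.jpg').catch(console.error);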
(disclaimer: I'm one of the libvips maintainers, so I'm not very neutral)
I see two ways on two completely different budgets:
Run a one-off import/resize job over the bucket, e.g. https://transloadit.com/demos/file-importing/resize-all-images-in-an-s3-bucket/ - this is not a way I recommend unless you only have 1-2 GBs of images.
Link your S3 to a Cloudinary service and do the transformations with Cloudinary (you will not like the cost ($$) for the number of images you have).
In AWS, I hope you use CloudFront to serve your assets. Regardless of the transformation technology, you will mainly do two things:
Create one Lambda function to transform all newly created assets in S3. What I do is "monitor" an S3 bucket: everything new coming in triggers my Lambda function, and I create the assets in two other folders, ending up with full, half, and thumb resolutions. In Meteor you then link each size to what you need. The most typical case is a user profile image that you need to show as a full header, in a listing, or as a small thumb in a chat.
Create one Lambda@Edge function (slightly more $$, I believe) and attach it to your CloudFront edge to respond to all calls. If the storage cost is not too high for the volume you currently store, you can transform your images as they are being requested and replace the old, larger images, rather than running it in bulk as a one-time process.
Instead of Lambda@Edge, you could probably set up an EC2 machine with Node and run a function to loop through all your S3 assets and do the transformation, as sketched below.
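A rough sketch of that bulk EC2 loop (AWS SDK v2 for Node.js; the bucket name and the per-object transform are placeholders):
// Sketch: walk every object in an S3 bucket, one page at a time,
// and run a transform on each key.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function transformAll(bucket) {
  let token;
  do {
    // listObjectsV2 returns up to 1000 keys per page.
    const page = await s3.listObjectsV2({
      Bucket: bucket,
      ContinuationToken: token
    }).promise();
    for (const obj of page.Contents) {
      await transformOne(bucket, obj.Key);
    }
    token = page.NextContinuationToken;
  } while (token);
}

// Hypothetical per-object transform - fill in with getObject -> Sharp -> putObject,
// as in the Lambda example below.
async function transformOne(bucket, key) { /* ... */ }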
Anyway, I feel that what you want to do is all AWS, not really related to Meteor. One more thing to do: optimize images before they are uploaded. If you use React with Meteor I could provide you with the necessary components; otherwise I can give you the components and you write the Blaze view layer or whatever else you may use.
I have the Lambda transformation functions in production based on ImageMagick, in case you are interested in going this way. I am also planning to "upgrade" this function to use Sharp (as in the example below), but for the time being it is doing great in production; I will switch when I get some time.
Check this example:
Download the image from S3, transform, and upload to a different S3 bucket or folder.
// Minimal runnable shape (Node.js Lambda handler); the destination bucket
// and WEB_WIDTH_MAX are illustrative.
const AWS = require('aws-sdk')
const Sharp = require('sharp')

const s3 = new AWS.S3()
const WEB_WIDTH_MAX = 1024            // target width for the half-size rendition
const dstBucket = 'my-resized-bucket' // illustrative destination bucket

exports.handler = (event, context, callback) => {
  // Source bucket/key come from the S3 event that triggered the function
  const srcBucket = event.Records[0].s3.bucket.name
  const srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '))
  const imageName = srcKey.split('/').pop()
  const dstKeyResizedHalf = `p-half/` + imageName

  s3.getObject({
    Bucket: srcBucket,
    Key: srcKey
  }).promise()
    .then(data => Sharp(data.Body)
      .resize(WEB_WIDTH_MAX)          // resize first, then encode as JPEG
      .jpeg({
        chromaSubsampling: '4:4:4',
        progressive: true
      })
      .toBuffer()
    )
    .then(buffer => s3.putObject({
      Body: buffer,
      Bucket: dstBucket,
      ContentType: 'image/jpeg',
      Key: dstKeyResizedHalf,
      CacheControl: 'max-age=864000'
    }).promise())
    .then(() => callback(null, `wrote ${dstKeyResizedHalf}`))
    .catch(err => callback(err))
}
I use https://www.imagemagick.org/ to resize, crop, and rotate my images. It works with Meteor. This will be a good starting point to explore:
https://github.com/CollectionFS/Meteor-CollectionFS
I am creating a website in which each user has an "avatar". An avatar has different accessories like hats, facial expressions, etc. I made this previously on a PHP website, but I am using React to create this new website. I am loading each user's avatar and its item links from Firestore. I do not want to use absolute positioning or CSS; I want the avatar to be one image.
I found this library, https://github.com/lukechilds/merge-images, which seems to be exactly what I need, but when I load external images with it I get an error.
Any solutions to this error or suggestions for an alternative would be greatly appreciated.
My code:
render() {
  mergeImages([
    'http://example.com/images/Avatar.png',
    'http://example.com/images/Hat.png',
  ])
    .then((b64) => {
      document.querySelector('img.abc').src = b64;
    })
    .catch(error => console.log(error));
  return (
    ...
    <img className="abc" src="" width={100} height={200} alt="avatar" />
    ...
  );
}
The merge-images package has some quirks. One of those quirks is that it expects individual images to either be served from your local server (example: http://localhost:3000/images/head.png, http://localhost:3000/images/eyes.png, and http://localhost:3000/images/mouth.png) or that those individual images be imported into a single file.
Working example: https://github.com/mattcarlotta/merge-images-example (this example includes the first three options explained below with the fourth option utilizing the end result of using a third party CDN)
To run the example, clone the repo:
git clone https://github.com/mattcarlotta/merge-images-example
Change directory:
cd merge-images-example
Then install dependencies:
yarn install
Then run the development server:
yarn dev
Option 1:
The simplest implementation would be to import them into an AvatarFromFiles component, as sketched below. However, as written, it isn't reusable and isn't suitable for dynamically selected avatars.
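A minimal sketch of that approach (assuming a webpack-style bundler that resolves image imports to same-origin URLs; file names are illustrative):
// AvatarFromFiles.js - statically imported layers, merged on mount.
import React, { useEffect, useState } from 'react';
import mergeImages from 'merge-images';
import body from './images/body.png'; // illustrative assets
import hat from './images/hat.png';

const AvatarFromFiles = () => {
  const [src, setSrc] = useState('');

  useEffect(() => {
    // Merge once on mount rather than inside render.
    mergeImages([body, hat]).then(setSrc).catch(console.error);
  }, []);

  return <img src={src} width={100} height={200} alt="avatar" />;
};

export default AvatarFromFiles;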
Option 2:
You may want to serve them from the local server, like the AvatarFromLocalServer component with a webpack dev config. Then you would retrieve stored strings from an API and pass them from state down into the component. Once again, this still requires the images to be present in the images folder, but more importantly, it isn't ideal for a production environment because the images folder must be placed outside of the src folder to be served. This could also lead to security issues. Therefore, I don't recommend this option at all.
Option 3:
Same as Option 1, but lazy loaded like the AvatarFromLazyFiles component, and therefore flexible. You can load images by name; however, it still requires that all of the images be present at runtime and during production compilation. In other words, what you have on hand is what you get.
Option 4:
So... the ideal option would be to build an image microservice, or use a CDN that handles all things images (uploading, manipulating/merging, and serving). The client would only select/upload new images to this microservice/CDN, while it handles everything else. This may require a bit more work up front, but it offers the most flexibility, is easy for the client to use, and gives the best performance, as it offloads all the work from the client to the dedicated service.
In conclusion: if you plan on having a set amount of images, use Option 3; otherwise, go with Option 4.
Problem
This is a CORS issue. The images are coming from a different origin that's not your server.
If you look at the source of the library, you'll notice it's using a <canvas> under the hood to merge the images and then getting the resulting data. A canvas becomes "tainted" by images loaded from another origin, and its data can then no longer be read. There's good reasoning behind this: loading an image into a canvas is a way to fetch data, and since you can retrieve the data from the canvas as base64, a malicious page could steal information by first loading it into a <canvas> and then pulling it out.
You can read about it directly from the spec for the <canvas> element.
Solution
You need to serve the images either from the same origin (essentially, the same domain) or with an Access-Control-Allow-Origin: ... header in the HTTP response that serves each image. There are ways to do this in Firebase Storage, or in whatever other server solution you might use.
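For example, merge-images accepts a crossOrigin option that it sets on the underlying Image objects; combined with a permissive header on the image host, the canvas stays untainted (URLs here are illustrative):
// Sketch: request the source images as CORS images; this only works if the
// host serving them also sends an Access-Control-Allow-Origin header.
mergeImages([
  'https://storage.example.com/images/Avatar.png',
  'https://storage.example.com/images/Hat.png',
], { crossOrigin: 'anonymous' })
  .then(b64 => { document.querySelector('img.abc').src = b64; })
  .catch(console.error);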
I am trying to host an HTML file on an ESP8266 access point. I can properly show an .html file. Unfortunately, when accessing the HTML page, my browser cannot load the JavaScript content. Strangely, when I work locally on my machine it works perfectly fine. When I access the page on the ESP8266 I receive the error
"Not found: dygraph.min.js."
Obviously, the browser does not find the JavaScript source, and I wonder why. I have tried several ways of naming and referencing it, but no luck so far.
I upload the files with the ESP8266 Sketch Data Upload tool to SPIFFS. In the HTML file I reference the JS as <script type="text/javascript" src="dygraph.min.js"></script>.
Did anybody experience anything like this before? The whole code can be found here:
https://github.com/JohnnyMoonlight/esp8266-AccessPoint-Logger-OfflineVisualisation
I am looking forward to your input!
Thanks and best!
Take a read through your code, and imagine the requests that will be made of your web server.
Your code is written to handle requests for two URLs: / and /temp.csv - that's it.
When /temp.csv is accessed, you serve the contents of index.html. When the browser interprets that file it will try to load /dygraph.min.js from your ESP. You don't have a handler for that file. So the load fails.
You need to add a handler for it and then serve the file. So you'll need to add a line like:
server.on("/dygraph.min.js", handleJS);
and define a function void handleJS() that does what handleFile() does.
You'll need to do the same thing for /dygraph.css; you don't have a handler for it either.
I would do it this way:
void handleHTML() {
  handleFile("index.html");
}

void handleJS() {
  handleFile("dygraph.min.js");
}

void handleCSS() {
  handleFile("dygraph.css");
}

// const char* so that string literals can be passed in cleanly
void handleFile(const char *filename) {
  File f = SPIFFS.open(filename, "r");
  // the rest of your handleFile() code here
}
and in your setup():
server.on("/", handleRoot);
server.on("/temp.csv", handleHTML);
server.on("/dygraph.css", handleCSS);
server.on("/dygraph.min.js", handleJS);
Separately:
Your URL-to-file mappings are messed up. The code I shared above is consistent with what you have now, but normally you'd want / to serve index.html; you have it serving a fragment of HTML.
Normally /temp.csv would serve a comma-separated-value file. I see you have one in the repo, and you have code to add data to it; you're just not serving it - right now you have that URL serving index.html. Once you start successfully loading the JavaScript, you'll have problems with that.
You'll need to sort those out to get this working right.
Also, in loop() you should move server.handleClient(); to be the first thing in the loop. The way you have it written you're only checking to see if there's a web request if it's time to take another temperature reading. You should always check to see if there's a web request, otherwise you're unnecessarily slowing down web service.
One last thing, completely separate from the web server code, and I wouldn't worry about this till you get the rest of your code working: your code is writing to SPIFFS roughly every 5 seconds. SPIFFS is stored in flash memory on the ESP8266. ESP8266 boards use cheap flash memory that doesn't last a long time - it wears out after maybe 10,000 to 100,000 write cycles (this is a little complicated; it's broken into "pages" and the individual cells in the pages wear out, but you have to write the entire page at the same time).
It's hard to say for sure what its lifetime will be; it depends on the specific ESP8266 board and flash chip involved. At one write every 5 seconds, 10,000 write cycles means the flash memory on your board might start failing after 50,000 seconds if you keep writing to the same spot; 100,000 write cycles would give you about 500,000 seconds. It depends on how often the same place in flash is written to. If that's a problem for you, you might want to increase the delay between writes or do something else with your data.
You might not run into this because you're appending to a file - you'll still rewrite the same blocks of flash memory many times, but not 10,000 times - unless you often remove the CSV file and start over. So this might be a problem for you long term or might not.
You can read more about these problems at https://design.goeszen.com/mitigating-flash-wear-on-the-esp8266-or-any-other-microcontroller-with-flash.html
Good luck!
I'm making a PL/SQL-generated HTML5 web page. It's running on an Oracle 10g XE server. Okay, now that the setup is clear, my problem: I need to include a JavaScript file in the page. Simply
HTP.P('<script type="text/javascript" src="js/ScriptFileName.js"></script>');
doesn't work, of course. So I created a directory object and granted read/write to PUBLIC, then changed the string to match the newly created object instead of the path. Still doesn't work. I know I can write
HTP.P('<script type="text/javascript"> MY JAVASCRIPT HERE </script>');
And I've done so with other scripts (I even had to write CSS this way). But this time it will not work, the reason being that the JavaScript I'm trying to run has been minified, so it's written all on one line, and there is a lot of it too. I tried to reformat it back to normal, but failed many a time.
So, I went online and searched for a solution. I found one: it seems this include should go not into the page, but into the server config. Makes sense, since PL/SQL is server-side. But when I went looking for the usual httpd.conf, it was nowhere to be found in the database directory. So I went online again; the result: not a word in any Oracle manual about where the HTTP server configs are in 10g XE. I searched some forums; exactly one person asked where httpd.conf is in XE, and didn't get an answer. Please help, I'm desperate.
P.S. I don't use APEX. I don't get that mumbo-jumbo. So I write in Notepad and run the scripts on the SQL command line.
Firstly, XE has its own built-in HTTP server called the Embedded PL/SQL Gateway (EPG). But you don't HAVE to use that. You can use an Oracle HTTP Server with the mod_plsql plugin, or you can use the APEX Listener.
The question is: on what server is "ScriptFileName.js"?
Is it a flat file on the database server? If so, you'll need to use the Oracle HTTP Server (or Apache or similar) to serve it. The database is pretty much unaware of files on its server, and the EPG can't deliver them. [At least not in any practical sense; you could do weird things with chicken entrails and UTL_FILE, but you don't want to go there.]
Is it a file stored in the database? That sounds exotic, but it is pretty much how all the CSS, images etc. are served up through the EPG. The best explanation of how to get files in and out of there is by Dietmar.
Is it a file stored on a separate machine? Often the best answer. The src= directive will be read by the end user's browser, which will do an HTTP GET to that URL. It doesn't have to be a URL on the same domain/host as the rest of the page.