I'm using the DeepAI API to turn an image into a cartoon. It renders, but now I need to save the image so it can be downloaded. I've looked all over Stack Overflow and can't find the answer (the examples all use the GET method).
I also tried let blob = await response.blob();
but it didn't work; in fact, it stopped the render!
(async function() {
let result=await deepai.callStandardApi("toonify", {
image: "<?php echo $url;?>"
});
let response=await deepai.renderResultIntoElement(result, document.getElementById('RES'));
let blob=response.blob();
//var file=new Blob(blob, {type:'image/jpeg'});
})();
saveAs(blob, "image/jpeg", "image.jpeg");
It's nearly there: it tries to download to a file, but on opening the file it says the format is not correct. Any ideas?
I'm getting this error in the console: Uncaught TypeError: File constructor: Argument 1 can't be converted to a sequence.
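Two things stand out in the snippet above: response.blob() returns a Promise and is never awaited, and blob is declared inside the async function while saveAs is called outside it, where blob doesn't exist. A sketch of the fix (assuming FileSaver.js's saveAs, as in the original):

```javascript
// response.blob() is asynchronous, so it must be awaited before use
async function responseToBlob(response) {
  return await response.blob();
}

// Inside the original async IIFE (sketch - deepai and saveAs as in the question):
//   let blob = await responseToBlob(response);
//   saveAs(blob, "image.jpeg"); // saveAs takes (blob, filename), not a MIME type
```

Keeping the saveAs call inside the async function is what fixes the scope error; awaiting the blob is what fixes the broken file format.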
I am trying to create a File object in React Native so I can upload that file (a video) to the YouTube Data v3 API. My app works without issue in plain JavaScript, and I am trying to make it work in React Native. The upload part works, but I am having trouble getting the video file in React Native.
This is my working code in plain JavaScript using jQuery:
function getVideoFile() {
return $('#file').get(0).files[0];
}
I just get the video file using jQuery (which returns a File object) and send it to the API, and it uploads without issues.
Here is my React Native so far:
let file = await fetch('./assets/video.mp4');
let videoFile = await file.blob();
var blob = videoFile.slice(0, videoFile.size, "video/mp4")
var f = new File([blob], "video.mp4", {type: "video/mp4"});
The upload to YouTube in React Native works without any errors but in My Videos page on YouTube I get
Processing abandoned
The video could not be processed
The only difference I can see between my JavaScript and React Native code is the size property of the File object. The size in JavaScript (1570024) is much larger than the size in React Native (3539), and it's the same video file. Could someone please explain the file size difference? I am struggling to understand what I am doing wrong and what the correct way is to read a video file into a File object in React Native.
Update 1
I tested my React Native fetch code in plain JavaScript: there it gets the correct file size, and the uploaded video is successfully processed by YouTube. In my React Native app, the same function behaves differently and gets a smaller file size.
async function getVideoFile() {
let file = await fetch('video.mp4');
let videoFile = await file.blob();
return new File([videoFile], "video.mp4", {type: "video/mp4"});
}
I found the solution! In React Native, I first need to use require to get the correct path.
var path = require("./assets/video.mp4");
let response = await fetch(path);
var blob = await response.blob();
var file = new File([blob], "video.mp4", {type: "video/mp4"});
For some strange reason, fetch() always returns a 200 OK status in my React Native app, even when the url/file doesn't exist.
When I upload plain text to firebase storage, the result that ultimately gets uploaded is not the plain text that I submitted, but instead, a string of numbers that correspond to ASCII characters.
I followed the example from the docs almost exactly, here is my code:
const storageRef = firebase.storage().ref("example.txt");
const message = "This is my message.";
storageRef.putString(message);
However, when I check the file on the server, I find that it contains the following:
84,104,105,115,32,105,115,32,109,121,32,109,101,115,115,97,103,101,46
I have also tried this code:
const storageRef = firebase.storage().ref("example.txt");
const message = "This is my message.";
storageRef.putString(message, "raw", { contentType: "text/plain" });
But it doesn't make a difference.
I am running this from a react native expo managed workflow, using the firebase js sdk (I just ran expo install firebase).
Can anyone tell me why my data is getting converted to ASCII, how I can prevent that from happening, or how I can otherwise resolve this?
Pretty sure this is the result of a Firebase bug, but I found a workaround: create a blob and use "put" instead of "putString".
E.g.
const obj = {hello: 'world'};
const blob = new Blob([JSON.stringify(obj)], {type : 'application/json'});
const storageRef = firebase.storage().ref("example.json");
storageRef.put(blob)
Still having a little trouble figuring out how to do this for images though...
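The same workaround should extend to images: get the image into a Blob first, then hand that Blob to put. A sketch (the helper name and the uri variable are mine, not a Firebase API):

```javascript
// Fetch any uri (bundled asset, camera result, remote url) into a binary Blob
async function uriToBlob(uri) {
  const response = await fetch(uri);
  return await response.blob();
}

// usage (sketch):
//   const blob = await uriToBlob(imageUri);
//   firebase.storage().ref("photo.jpg").put(blob, { contentType: "image/jpeg" });
```

Because put receives raw binary data rather than a string, it sidesteps whatever encoding putString is applying.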
I'm having trouble loading an audio file and then passing it to readAsArrayBuffer. I don't want the user to choose a file; I just want to load one locally, like so:
let file = new File('../music/song.mp3');
let reader = new FileReader();
reader.onload = function(e) {
playSound(e.target.result);
}
reader.readAsArrayBuffer(file);
When I do the above, I get the following error:
Uncaught TypeError: Failed to construct 'File': 2 arguments required, but only 1 present.
So I try this:
let file = new File([''], '../music/song.mp3');
Which then produces this error:
Uncaught (in promise) DOMException: Unable to decode audio data
The size of what it loads is 0, which I believe is why it's producing this error.
Everything works fine when I let the user choose a file to load:
document.getElementById('open-file').onchange = function(evt) {
let file = evt.target.files[0];
Does anyone know how to fix this?
As Patrick Evans stated, it's not possible because of the security risk: a page can't construct a File from an arbitrary local path. Such a shame, as you can easily play audio with HTML5, but you can't read the audio file yourself as an array buffer this way.
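That said, a file your web server actually serves can be loaded without a File object at all, by fetching it and reading the raw bytes - a sketch:

```javascript
// fetch the audio over HTTP and return its raw bytes as an ArrayBuffer
async function fetchArrayBuffer(url) {
  const response = await fetch(url);
  return await response.arrayBuffer();
}

// in the browser (sketch), equivalent to the FileReader version:
//   playSound(await fetchArrayBuffer('../music/song.mp3'));
```

This only works for URLs the page is allowed to request (same-origin or CORS-enabled), which is exactly the restriction the File constructor route was trying to bypass.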
I want to save one image from my local website.
I've searched all over the internet, but almost everything I find is C# and Java code that I can't convert to JavaScript. Most examples use Point or IO libraries that aren't available in JavaScript.
I also searched Stack Overflow for Node.js code and tested what I found, but it isn't working for me.
Currently, my code can take a screenshot of the whole web page, but I want it to capture only the image with a given id.
Here is my code:
driver.findElement(webdriver.By.xpath('//img[@id="c_pages_Image"]'))
.then(function(){
driver.takeScreenshot("c:\\selenium_local_map\\out1.png");
});
driver.takeScreenshot().then(function(data){
var base64Data = data.replace(/^data:image\/png;base64,/,"")
fs.writeFile("out.png", base64Data, 'base64', function(err) {
if(err) console.log(err);
});
});
const fs = require('fs');
const { By, until } = require('selenium-webdriver');

let imageElement = await driver.wait(until.elementLocated(By.xpath('//img')), 5000); // resolves to the image element
let screenshot = await imageElement.takeScreenshot(false); // base64-encoded PNG of just this element
fs.writeFileSync('\path\to\target\file', screenshot, 'base64')
I am playing with the FileSystem API in Google Chrome 16, but I can't write more than one blob (without reopening the file for appending). It seems like the file is closed after the first write.
For example:
var blob = new WebKitBlobBuilder();
blob.append('one');
fileWriter.write(blob.getBlob('text/plain'));
var blob2 = new WebKitBlobBuilder();
blob2.append('two');
fileWriter.write(blob2.getBlob('text/plain'));
gives an Uncaught Error: INVALID_STATE_ERR: DOM File Exception 7
The W3 doc says about FileWriter: "This interface expands on the FileSaver interface to allow for multiple write actions, rather than just saving a single Blob."
According to the specification, you are not allowed to use a writer when it's busy writing:
write
If readyState is WRITING, throw a FileException with error code INVALID_STATE_ERR and terminate this overall series of steps.
Since the writer is asynchronous, you have to wait using a callback:
fileWriter.onwriteend = function() {
  // the first write has finished - now it's safe to start the second one
  fileWriter.onwriteend = null; // prevent this handler from re-firing
  fileWriter.write(blob2.getBlob('text/plain'));
};
// write the first blob; onwriteend fires when it completes
fileWriter.write(blob.getBlob('text/plain'));
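If you have several blobs to write, the same callback idea generalizes to a small queue. A sketch (the helper is mine, not part of the FileWriter spec):

```javascript
// Write blobs one at a time, starting each write only after the previous
// one has fired onwriteend
function writeSequentially(writer, blobs) {
  var i = 0;
  function writeNext() {
    if (i >= blobs.length) {
      writer.onwriteend = null; // all done - stop listening
      return;
    }
    writer.onwriteend = writeNext;
    writer.write(blobs[i++]);
  }
  writeNext();
}
```

Each write is issued only once the writer's readyState has left WRITING, which is exactly the condition the spec imposes.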