I'm looking to allow a user to upload a JavaScript file containing worker-thread code and then spawn that thread, but I'm facing some issues with the device paths.
First I use the DocumentPicker to pick the file:
const pickerResult = await DocumentPicker.pickSingle({
  type: [types.allFiles],
  copyTo: 'documentDirectory'
});
Then I use the RNFS.readDir to get the file and its path comes back as /data/user/0/playground.com/files/my-thread.js.
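Roughly like this (a sketch of the lookup, not the exact code; the entry shape comes from react-native-fs and the file name is just the one from this example):
import RNFS from 'react-native-fs';

// List the app's document directory (where copyTo: 'documentDirectory' placed the file)
const entries = await RNFS.readDir(RNFS.DocumentDirectoryPath);
const uploaded = entries.find((entry) => entry.name === 'my-thread.js');
console.log(uploaded.path);
// => /data/user/0/playground.com/files/my-thread.js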
Now, react-native-threads expects a relative path, such as new Thread('./worker.thread.js');. If I try to pass the path returned by RNFS, I receive the following "no such file" error:
Error: ENOENT: no such file or directory, open '/data/user/0/playground.com/files/my-thread.js'.
I feel like this may be possible to avoid by using a Blob, but perhaps there is an easier way?
Related
So I am trying to make a music app where people can upload music. First, the client takes the file and turns it into an object URL like this:
const track_src = URL.createObjectURL(track);
data.track_src = track_src;
await Req.post("/api/u/music/upload", data)
After that, the server receives the data and the object URL and uploads it to Firebase Storage:
//track_src is the object url
await st.bucket(sid).upload(track_src, {
  gzip: true,
  metadata: {
    cacheControl: 'public, max-age=31536000',
  },
})
But I get an error that says:
Error: ENOENT: no such file or directory, stat 'E:\Server\blob:http:\localhost:3000\91e53bb5-abf2-46b4-bd0c-268b242e93f3'
What you are trying to do is not possible: an object URL created with URL.createObjectURL() only exists inside the browser context that created it, so the server cannot read a file from it. There are basically two methods of uploading files to Storage; you either have to:
Use the bucket().upload() method, which accepts the path to a local file as its pathString parameter, so you need the actual file for this, not a URL. This is the ideal option if you have the file stored locally on the server, and given the information you shared this might be the way to go for you. You can look at this answer for more information.
Use bucket().file() to get a reference to a file in Storage and then use the file.createWriteStream() method to get a stream that writes the file's content. This can be a valid solution if you have the file's contents in memory.
I would suggest taking a look at this documentation for the methods offered by the Bucket class and this documentation for the methods offered by the File class.
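For illustration, the second approach could look roughly like this, assuming the actual track bytes reach your server as a Buffer (trackBuffer and the destination file name below are placeholders; st and sid are taken from your code):
const file = st.bucket(sid).file('tracks/my-track.mp3'); // placeholder destination name

const writeStream = file.createWriteStream({
  gzip: true,
  metadata: {
    cacheControl: 'public, max-age=31536000',
  },
});

writeStream.on('error', (err) => console.error(err));
writeStream.on('finish', () => console.log('upload finished'));
writeStream.end(trackBuffer); // write the in-memory contents and close the stream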
I'm attempting to use the "file-type" NPM module (which I have working on the server) client side to validate mime type prior to a file upload to an S3 bucket.
The readme for the module includes an example of using it in the browser:
const FileType = require('file-type/browser');
const url = 'https://upload.wikimedia.org/wikipedia/en/a/a9/Example.jpg';
(async () => {
  const response = await fetch(url);
  const fileType = await FileType.fromStream(response.body);
  console.log(fileType);
  //=> {ext: 'jpg', mime: 'image/jpeg'}
})();
That obviously won't work directly in the browser due to the "require", and if I link to the NPM file directly, I get:
Uncaught ReferenceError: require is not defined
So I've tried using Webpack (with which I'm not well versed at all, but I have followed the official Webpack tutorial and a few other tutorials) to create a "main.js" file, which I then access via a script tag.
For example, running this through Webpack:
import * as mimeFromBuffer from 'file-type/browser.js';

export function mimeTyping(mimeValidationBytes){
  (async () => {
    const fileType = await mimeFromBuffer(mimeValidationBytes);
    console.log(fileType);
    //=> {ext: 'jpg', mime: 'image/jpeg'}
  })();
  return;
}
Results in this when I call the mimeTyping client side:
Uncaught ReferenceError: mimeTyping is not defined
I've tried Vinay's answer from this question in the browser:
import fileType from 'file-type';
const blob = file.slice(0, fileType.minimumBytes);
const reader = new FileReader();
reader.onloadend = function(e) {
  if (e.target.readyState !== FileReader.DONE) {
    return;
  }
  const bytes = new Uint8Array(e.target.result);
  const { ext, mime } = fileType.fromBuffer(bytes);
  // ext is the desired extension and mime is the mimetype
};
reader.readAsArrayBuffer(blob);
Which gets me this:
Uncaught SyntaxError: Cannot use import statement outside a module
I've researched that error a bit, with a common solution being to add "type":"module" to package.json. This did not help (error was unchanged).
I found a similar question in the module's Github repository, which suggests:
Although this module is primarily designed for Node.js, using it in the
browser is possible as well. That is indeed what 'file-type/browser' is
intended for. It provides the right dependencies and right functions
to the JavaScript module bundler. Some dependencies, which are
typically present in a Node.js environment but are missing in a
browser environment, you may need to pass (polyfill) to your module
bundler. Although in some cases it may be a bit tricky to configure,
note that this is a pretty common task handled by a module bundler.
I'm struggling to understand what next steps that suggests.
Another comment at the bottom of that thread indicates the author had success with:
import { fromBuffer } from 'file-type/core';
...
const buffer = await uploadedFile.arrayBuffer();
const types = await fromBuffer(buffer);
I'm not sure how to implement that, and what other code I need (I'm guessing this gets passed to Webpack, and probably requires an export statement, and then an import on the client side?)
I've tried passing this into Webpack:
const fileType = require('file-type/browser');
module.exports = fileType;
But linking to the output file again gets:
Uncaught SyntaxError: Cannot use import statement outside a module
I think I understand conceptually what I need to do: pass the NPM module to Webpack, which in turn parses it and finds any dependencies, gets those dependencies, and creates a JavaScript file I can use client side. It seems I'm doing something in there wrong.
I've spent days trying to understand how to use NPM modules client side (still quite foggy in my mind) and trying all sorts of variants on the above code - would really appreciate some guidance (first time posting a question here - please go easy on me!).
Thank you!
Edit: I don't think this is a duplicate - I did review How to import JS library from node_modules, Meteor Npm-module client-side?, how to use node.js module system on the clientside, How to call a server-side NodeJS function using client-side JavaScript and Node Js application in client machine but no suggestion I tried in any of those seemed to help.
Finally got this working. In case anyone else is stuck on this, here's an explanation (apologies for the lack of brevity - probably this should be a blog post...).
To flesh out the use case a bit further, I'm using Uppy to allow users to upload files to an AWS S3 bucket. The way this works is that, when the user uploads a file, Uppy makes a call to my server where an AWS pre-signed URL is generated and passed back to the client. The client then uses that pre-signed URL to upload the file directly to the S3 bucket, bypassing the server, such that the file doesn't pass through the server at any point.
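A rough sketch of that server-side signing step, assuming the aws-sdk v2 client (the bucket name, object key, expiry, and content type below are placeholders, not necessarily the exact values in use):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Generate a pre-signed PUT URL that the client can upload to directly:
const signedUrl = s3.getSignedUrl('putObject', {
  Bucket: 'my-upload-bucket',   // placeholder bucket name
  Key: 'uploads/some-file-key', // placeholder object key
  Expires: 60,                  // URL validity in seconds
  ContentType: 'image/jpeg',    // the content type the client must send with its PUT
});
// signedUrl is returned to the client, which PUTs the file bytes straight to S3.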
The problem I was attempting to solve was that files missing an extension ended up uploaded with the content / MIME type set to "application/octet", because it seems the browser, Uppy, and S3 all rely on the file extension to decide the file type (rather than parsing the so-called "magic bytes" of the file), and if the file extension is missing, AWS defaults to "application/octet". This causes issues when users attempt to open such files, as they are not handled correctly (i.e. a png file without an extension and with an "application/octet" content / MIME type opens a download dialog rather than being previewed, etc.). I also want to validate the MIME type / file type even in cases where the extension exists, so that I can exclude certain types of files, and so the files get handled appropriately when they are later downloaded (where the MIME type will again be validated) if an incorrect file extension is used.
I use the "file-type" NPM module to determine the MIME type server side, and that's straightforward enough, but changing the file's content type / MIME type when generating the AWS pre-signed URL is not enough to fix the problem - it still gets uploaded as "application/octet". I wanted to use the same module client side so we get the exact same results on the client as on the server, and needed in any case to determine the MIME type and set it accordingly pre-upload but post-pre-signed URL. I had no idea how to do this (i.e. use "file-type" client side - the meat of my question).
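A minimal sketch of that server-side check, assuming the file's first bytes are already available as a Buffer (the buffer variable here is a placeholder; FileType is the Node build of the same package):
const FileType = require('file-type');

// buffer holds the first bytes of the uploaded file (placeholder variable):
const fileType = await FileType.fromBuffer(buffer);
if (!fileType) {
  // file-type couldn't recognise the magic bytes - reject or fall back here
} else {
  console.log(fileType.ext, fileType.mime); // e.g. 'png', 'image/png'
}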
I finally gave up on Webpack - nothing I tried worked. So I switched to Browserify, and the sample browser code at the "file-type" repository worked at once! So then I was left trying to figure out how to pass a function through Browserify to use in the client side code.
This proved impossible for me - I couldn't figure out how to pass the asynchronous IIFE through to my code. So instead, I moved my Uppy code into the code I pass to Browserify:
// Get the dependency, which we've added to node via "npm install file-type":
const FileType = require('file-type/browser');

// When the user adds a file for upload to Uppy...
uppy.on('file-added', (file) => {
  // 1. Create a filereader:
  const filereader = new FileReader();

  filereader.onloadend = function (evt) {
    // 4. Once the filereader has successfully finished reading the file...
    if (evt.target.readyState === FileReader.DONE) {
      // Get the unsigned 8-bit integer array (i.e. Uint8Array) of the 600 bytes (this looks like '[119,80,78,71...]'):
      const uint = new Uint8Array(evt.target.result);
      // Determine the mime type using the "file-type" NPM package:
      (async () => {
        // Pass in our 600 bytes ("uint"):
        const fileType = await FileType.fromBuffer(uint);
        console.log(fileType); // outputs => {ext: 'jpg', mime: 'image/jpeg'}
        // Do some validation here...
        //
        // Assign the results to the file for upload - we're done!:
        file.extension = fileType.ext;
        file.meta.type = fileType.mime;
        file.type = fileType.mime;
      })();
    }
  };

  // 2. Grab the first 600 bytes of the file for mime type analysis - most mime
  // types use the first few bytes, but some (looking at you, Microsoft...) start at the
  // 513th byte; ISO CD files start at the 32,770th byte, so we'll ignore those rather than
  // read that much data for each file - users of this system certainly aren't expected to
  // upload ISO CD files! Also, some .zip files may start their identifying bytes at the
  // 29,153rd byte - we ignore those too (mostly, .zip files start at the first, 31st, or 527th byte).
  const blob = file.data.slice(0, 600);

  // 3. Start reading those 600 bytes...continues above at 'filereader.onloadend':
  filereader.readAsArrayBuffer(blob);
});
That all goes into a file I call "index.js". Then, having installed Browserify globally via "npm install -g browserify", I use this command to create the file ("main.js") that I link to in my client-side code:
browserify index.js -o main.js
I want to know how to get downloaded files into AWS S3 from Heroku, without going through a web page.
I'm using wget in Node.js to download audio files, and I want to re-encode them with ffmpeg. I have to store the file first, but Heroku doesn't provide space to store it.
Then I found that Heroku can connect with AWS S3; however, the sample code is a web page for users to upload through, and I just want to do it in code.
This is the code, which isn't connected with S3 yet.
const wget = spawn('wget', ['-O', 'upload_audio/' + sender_psid + '.aac', audioUrl]);

// spit stdout to screen
wget.stdout.on('data', function (data) { process.stdout.write(data.toString()); });

// spit stderr to screen
wget.stderr.on('data', function (data) { process.stdout.write(data.toString()); });

wget.on('close', function (code) {
  console.log(process.env.PATH);
  const ffmpeg = spawn('ffmpeg', ['-i', 'upload_audio/' + sender_psid + '.aac', '-ar', '16000', 'upload_audio/' + sender_psid + '.wav', '-y']);
  ...
});
And I get this error:
upload_audio/1847432728691358.aac: No such file or directory
...
Error: ENOENT: no such file or directory, open 'upload_audio/1847432728691358.wav'
How can I solve the problem?
Could anyone give some suggestions, please?
Or maybe it can be done without using S3?
I don't recommend using S3 until you've tried serving the files statically with Express. On the server, that looks like this:
app.use(express.static(path.join(__dirname, 'my_audio_files_folder')));
More on serving static files is available here.
AWS S3 seems to be adding unneeded complexity, but if you have your heart set on it, have a look at this thread.
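If you do go the S3 route, the upload itself doesn't need a web page; a rough sketch with the aws-sdk package (the bucket name and credential setup are assumptions, and the file path matches the wav output from your code) would be:
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3(); // reads AWS credentials from environment variables

const localPath = 'upload_audio/' + sender_psid + '.wav';
s3.upload({
  Bucket: 'my-audio-bucket',            // placeholder bucket name
  Key: sender_psid + '.wav',
  Body: fs.createReadStream(localPath), // stream the re-encoded file from the dyno's ephemeral disk
}, function (err, data) {
  if (err) return console.error(err);
  console.log('Uploaded to', data.Location);
});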
I'm studying Node.js right now, using the "Beginning Node.js" textbook.
The example in the book does not execute properly in the command prompt. (I'm using Ubuntu 18.04 with Node v9.4.0, and the book is three years old, so this may be related?)
In the downloadable source code for the book, this is the code that is provided:
var fs = require('fs');
// Create readable stream
var readableStream = fs.createReadStream('./cool.txt');
// Pipe it to out stdout
readableStream.pipe(process.stdout);
The file, cool.txt, is in the parent directory.
--parentFolder
----jsFileToExecute.js
----cool.txt
When I run node jsFileToExecute.js in the command prompt, I get this response:
events.js:137
throw er; // Unhandled 'error' event
^
Error: ENOENT: no such file or directory, open './cool.txt'
As this source code is directly from the textbook publisher's website and it still doesn't run, I'm assuming there's something wrong with it?
Looking for solutions to figure out why this isn't working. I've looked at the documentation at nodejs.org, but didn't find any clues.
Thank you for any help!
To avoid path issues, it's recommended to use path.join, like:
const fs = require('fs');
const path = require('path');

const coolPath = path.join(__dirname, 'cool.txt');
const readableStream = fs.createReadStream(coolPath);
With the above example, you're creating a path to the file relative to the directory of the current script, which Node.js exposes in the __dirname variable.
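For example (the absolute path below is only an illustration), if the script lives at /home/user/parentFolder/jsFileToExecute.js:
console.log(__dirname);
// => /home/user/parentFolder
console.log(path.join(__dirname, 'cool.txt'));
// => /home/user/parentFolder/cool.txt, regardless of the directory you run `node` from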
If the file is in the same directory from which you are currently running your project, then you can simply give the name of the file instead of a path with ./ or /, like here:
var fs = require('fs');
// Create readable stream
var readableStream = fs.createReadStream('cool.txt');
// Pipe it to out stdout
readableStream.pipe(process.stdout);
It will work, as it will read the file from the same directory.
I am trying to pipe an http audio stream from my nodejs server:
var streamPath = 'http://127.0.0.1:1485/mystream.mp3';
var stat = fs.statSync(streamPath);
response.writeHead(200, {'Content-Type': 'audio/mpeg','Content-Length': stat.size});
fs.createReadStream(streamPath).pipe(response);
The problem is that fs doesn't like the absolute path and I get the following error:
Error: ENOENT: no such file or directory, stat 'C:\myserver\http:\127.0.0.1:1485\mystream.mp3'
I can't find a way to use the absolute path. Is this even possible using fs?
'http://127.0.0.1:1485/mystream.mp3' is not an absolute path, it's a URL.
Absolute paths are something like /home/x/dir/file.txt on your filesystem.
If you want to get a stream from a URL then you need to use http or request. Or you need to use a path on your filesystem and not a URL if you want to get a file on your filesystem without using the network.
fs.createReadStream can only open local files on your filesystem.
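For example, a minimal sketch that proxies the remote mp3 with the built-in http module instead of fs (reusing the URL and the response object from the question; Content-Length is left out because a live stream's size may not be known up front):
const http = require('http');

http.get('http://127.0.0.1:1485/mystream.mp3', function (upstream) {
  // Forward the audio and pipe the bytes straight through to the client:
  response.writeHead(200, { 'Content-Type': 'audio/mpeg' });
  upstream.pipe(response);
});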
For more details, see:
https://nodejs.org/api/fs.html
To get a file over a network, see:
https://www.npmjs.com/package/request
https://www.npmjs.com/package/request-promise
You are trying to access a remote resource through fs - that is wrong. fs is meant to be used for the local filesystem. If you want to access any remote resource you have to use http or https. However, if I see things correctly, you are trying to access your localhost, which should work.
Your application is trying to access the following file: C:\myserver\http:\127.0.0.1:1485\mystream.mp3. If you look closely, that can't work: the path mixes your local path with a remote source (which is actually localhost). Try to fix your path; that should solve your problem. Keep in mind that fs will only work on your local system.
You should also think about fs.statSync: it will block everything else until it's finished.
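A non-blocking sketch of the same stat-then-stream pattern (filePath here is a placeholder for an actual local file, not the URL):
const fs = require('fs');

fs.promises.stat(filePath).then(function (stat) {
  response.writeHead(200, { 'Content-Type': 'audio/mpeg', 'Content-Length': stat.size });
  fs.createReadStream(filePath).pipe(response);
});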
Docs:
fs: https://nodejs.org/api/fs.html
http: https://nodejs.org/api/http.html
https: https://nodejs.org/api/https.html
Regards,
Megajin