There's a thing called 'callback hell', and it was the only way I could get a file from a server to my VPS, edit it, and upload it back. The process was simple:
Download a .json file from the FTP server
Edit the .json file on the PC
Upload the .json file and delete the PC's copy.
However, my problem is this: although the download happens once, the upload repeats once for every time I've issued the command during a single session (command #1 runs it once, command #2 runs it twice, and so on).
I tried writing it imperatively, but it got nullified, so I had to resort to callback hell to get the code running almost properly. The trigger initializes the command fine, but the command and the session get confused.
((
  // declaring my variables as default parameters
  ftp = new (require('ftp'))(),
  fs = require('fs'),
  serverFolder = './Path/Of/Server/',
  localFolder = './Path/Of/Local/',
  file = 'some.json',
  { log } = console
) => {
  // run once the server connection is ready
  ftp.on('ready', () => {
    // collect a list of files from the server folder
    ftp.list(serverFolder + file, (errList, list) =>
      errList || typeof list === 'object' &&
      list.forEach($file =>
        // if the individual file matches, resume to download the file
        $file.name === file && (
          ftp.get(serverFolder + file, (errGet, stream) =>
            errGet || (
              log('files matched! carry on to the operation...'),
              // write the downloaded stream to the local copy
              stream.pipe(fs.createWriteStream(localFolder + file)),
              stream.once('close', () => {
                // check if the file has a proper size
                fs.stat(localFolder + file, (errStat, stat) =>
                  errStat || stat.size === 0
                    // will destroy the server connection if bytes = 0
                    ? (ftp.destroy(), log('the file has no value'))
                    // edits if the file has a size, then uploads and ships
                    : (editThisFile(),
                      ftp.put(
                        fs.createReadStream(localFolder + file),
                        serverFolder + file, err => err || (
                          ftp.end(), log('process is complete!')
                        )
                      )
                      // editThisFile() is a place-holder editor
                      // edits by path, and object
                    )
                )
              })
            )
          )
        )
      )
    );
  });
  ftp.connect({
    host: 'localHost',
    password: '1Forrest1!',
    port: 21,
    keepalive: 0,
    debug: console.log.bind(console)
  });
})()
The main problem is: it returns a copy of the command over and over, as if each invocation 'carries over' into the next, for some reason.
Edit: although the merits of "programming style" differ from the common meta, it all leads to the same issue of callback hell. Any recommendations are welcome.
For readability, I had help editing my code to make it easier to follow; the code above is the better-readability version.
The ftp module's API leads to the callback hell. It also hasn't been maintained for a while and is buggy. Try a promise-based module like basic-ftp.
With promises the code flow becomes much easier to reason about, and errors don't require specific handling unless you want it.
const ftp = require('basic-ftp')
const fsp = require('fs').promises

async function updateFile(localFile, serverFile) {
    const client = new ftp.Client()
    await client.access({
        host: 'localHost',
        password: '1Forrest1!',
    })
    await client.downloadTo(localFile, serverFile)
    const stat = await fsp.stat(localFile)
    if (stat.size === 0) throw new Error('File has no size')
    await editThisFile(localFile)
    await client.uploadFrom(localFile, serverFile)
}
const serverFolder = './Path/Of/Server/'
const localFolder = './Path/Of/Local/'
const file = 'some.json'
updateFile(localFolder + file, serverFolder + file).catch(console.error)
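One caveat: basic-ftp's Client keeps its control connection open after the transfer, so the process may not exit on its own. A small variation of the same function that always releases the connection:

async function updateFile(localFile, serverFile) {
    const client = new ftp.Client()
    try {
        await client.access({
            host: 'localHost',
            password: '1Forrest1!',
        })
        await client.downloadTo(localFile, serverFile)
        const stat = await fsp.stat(localFile)
        if (stat.size === 0) throw new Error('File has no size')
        await editThisFile(localFile)
        await client.uploadFrom(localFile, serverFile)
    } finally {
        client.close() // release the connection even if an await above throws
    }
}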
I'm a new developer and this is my first Stack Overflow post. I've tried to stick to the format as best as possible. It's a difficult issue for me to explain, so please let me know if there are any problems with this post!
Problem
I'm working on a vscode extension specifically built for Next.js applications and running into issues with an event listener for the onDidChangeTextDocument() method. I'm looking to capture data from a JSON file that will always be located in the root of the project (it is automatically generated/updated on each refresh of the test node server for the Next.js app).
Expected Results
The extension is able to look for updates on the file using onDidChangeTextDocument(). However, the issue I'm facing is on the initial run of the application. For the extension to start listening for changes to the JSON file, the user has to be in the JSON file. It's supposed to work no matter what file the user has open in vscode. After the user visits the JSON file while the extension is on, it begins to work from every file in the Next.js project folder.
Reproducing this issue is difficult because it requires an extension, npm package, and a Next.js demo app, but the general steps are below. If needed, I can provide code for the rest.
1. Start debug session
2. Open Next.js application
3. Run application in node dev
4. Do not open the root JSON file
What I've Tried
Console logs show we are not entering the onDidChangeTextDocument() block until the user opens the root JSON file.
The file path to the root folder is correctly generated at all times, and before the promise is reached.
Is this potentially an async issue? Or is the method somehow dependent on the Active Window of the user to start looking for changes to that document?
Since the file is both created and updated automatically, we've tested for both, and neither works until the user opens the root JSON file in their vscode.
Relevant code snippet (this will not work alone, but I can provide the rest of the code if necessary):
export async function activate(context: vscode.ExtensionContext) {
  console.log('Congratulations, your extension "Next Step" is now active!');
  setupExtension();
  const output = vscode.window.createOutputChannel('METRICS');
  // this is getting the application's root folder filepath string from its uri
  if (!vscode.workspace.workspaceFolders) {
    return;
  }
  const rootFolderPath = vscode.workspace.workspaceFolders[0].uri.path;
  // this gives us the fileName - we join the root folder URI with the file we are looking for, which is metrics.json
  const fileName = path.join(rootFolderPath, '/metrics.json');
  const generateMetrics = vscode.commands.registerCommand(
    'extension.generateMetrics',
    async () => {
      console.log('Successfully entered registerCommand');
      toggle = true;
      vscode.workspace.onDidChangeTextDocument(async (e) => {
        if (toggle) {
          console.log('Successfully entered onDidChangeTextDocument');
          if (e.document.uri.path === fileName) {
            // name the command to be called on any file in the application
            // this parses our fileName to an URI - we need to do this for when we run openTextDocument below
            const fileUri = vscode.Uri.parse(fileName);
            // open the file at the Uri path and get the text
            const metricData = await vscode.workspace
              .openTextDocument(fileUri)
              .then((document) => document.getText());
          }
        }
      });
    }
  );
}
Solved this by adding an openTextDocument call inside the registerCommand block, outside of the onDidChangeTextDocument callback. This made the extension aware of the metrics.json file without it being open in the user's IDE.
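For reference, a minimal sketch of that fix (hypothetical placement, reusing fileName from the snippet above):

const generateMetrics = vscode.commands.registerCommand(
  'extension.generateMetrics',
  async () => {
    toggle = true;
    // opening the document once makes vscode track it even though it is
    // never shown in the editor, so change events for it now fire no
    // matter which file the user has open
    await vscode.workspace.openTextDocument(vscode.Uri.file(fileName));
    vscode.workspace.onDidChangeTextDocument(async (e) => {
      // ... same handler as above ...
    });
  }
);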
I am launching a Cloud Function in order to replicate a record I have in Firestore. One of the fields is an image, and the function first tries to copy the image and then duplicate the record.
This is the code:
export async function copyContentFunction(data: any, context: any): Promise<String> {
  if (!context.auth || !context.auth.token.isAdmin) {
    throw new functions.https.HttpsError('unauthenticated', 'Auth error.');
  }

  const id = data.id;
  const originalImage = data.originalImage;
  const copy = data.copy;
  if (id === null || originalImage === null || copy === null) {
    throw new functions.https.HttpsError('invalid-argument', 'Missing mandatory parameters.');
  }

  console.log(`id: ${id}, original image: ${originalImage}`);
  try {
    // Copy the image
    await admin.storage().bucket('content').file(originalImage).copy(
      admin.storage().bucket('content').file(id)
    );

    // Create new content
    const ref = admin.firestore().collection('content').doc(id);
    await ref.set(copy);
    return 'ok';
  } catch {
    throw new functions.https.HttpsError('internal', 'Internal error.');
  }
}
I have tried multiple combinations but this code always fails. For some reason the process of copying the image is failing. Am I doing anything wrong?
Thanks.
Using the copy() method in a Cloud Function should work without problems. You don't share any details about the error you get (I recommend using catch (error) instead of a bare catch), but I can see two potential problems with your code:
The file corresponding to originalImage does not exist;
The content bucket does not exist in your Cloud Storage instance.
The second problem usually comes from the common mistake of mixing up the concepts of buckets and folders (or directories) in Cloud Storage.
Actually, Google Cloud Storage does not have genuine "folders". In the Cloud Storage console, the files in your bucket are presented in a hierarchical tree of folders (just like the file system on your local hard disk), but this is just a way of presenting the files: there are no genuine folders/directories in a bucket. The Cloud Storage console simply uses the different parts of the file paths to "simulate" a folder structure, with the "/" delimiter character.
This doc on Cloud Storage and gsutil explains and illustrates this "illusion of a hierarchical file tree" very well.
So, if you want to copy a file from your default bucket to a content "folder", do as follows:
await admin.storage().bucket().file(`content/${originalImage}`).copy(
  admin.storage().bucket().file(`content/${id}`)
);
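And to see what is actually failing, here is the same try/catch with the error surfaced (a sketch assuming the function body from the question):

try {
  await admin.storage().bucket().file(`content/${originalImage}`).copy(
    admin.storage().bucket().file(`content/${id}`)
  );
  const ref = admin.firestore().collection('content').doc(id);
  await ref.set(copy);
  return 'ok';
} catch (error) {
  // logs the underlying Storage/Firestore error in the Cloud Functions logs
  console.error('copy failed:', error);
  throw new functions.https.HttpsError('internal', 'Internal error.');
}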
I'm trying to use Cloud Code to check whether a user-submitted image is in a supported file type and not too big.
I know I need to do this verification server-side and I think I should do it with Cloud Code using beforeSave – the doc even has a specific example about data validation, but it doesn't explain how to handle files and I couldn't figure it out.
I've tried the documented method for saving files, i.e.
const file = fileUploadControl.files[0];
const parseFile = new Parse.File(name, file);
currentUser.set("picture", parseFile);
currentUser.save();
and in the Cloud Code,
Parse.Cloud.beforeSave(Parse.User, (request, response) => { // code here });
But 1. this still actually saves the file on my server, right? I want to check the file size first to avoid saving too many big files...
And 2. Even then, I don't know what to do in the beforeSave callback. It seems I can only access the URL of the saved image (proof that it has been uploaded), and it seems very counter-intuitive to have to make another HTTPS request to check the file size and type before deciding whether to proceed with attaching the file to the User object.
(I'm currently using remote-file-size and file-type to check the size and type of the uploaded file, but no success here either).
I also tried calling a Cloud function, but it feels like I'm not doing the right thing, and besides I'm running into the same issues.
I can call a Cloud function and pass a saved ParseFile as a parameter, and then I know how to save it to the User object from the Cloud Code using the master key, but as above it still involves uploading the file to the server and then re-fetching it using its URL.
Am I missing anything here?
Is there no way to do something like a beforeSave on Parse.File, and then stop the file from being saved if it doesn't meet certain criteria?
Cheers.
If you have to do something with files, Parse lets you override the file adapter to handle file operations.
You can indicate the file adapter to use in your ParseServer instantiation:
var FSStoreAdapter = require('./file_adapter');
var api = new ParseServer({
  databaseURI: databaseUri,
  cloud: process.env.CLOUD_CODE_MAIN || __dirname + '/cloud/main.js',
  appId: process.env.APP_ID,
  filesAdapter: FSStoreAdapter, // YOUR FILE ADAPTER
  masterKey: process.env.MASTER_KEY, // Add your master key here. Keep it secret!
  serverURL: "https://yourUrl", // Don't forget to change to https if needed
  publicServerURL: "https://yourUrl",
  liveQuery: {
    classNames: ["Posts", "Comments"] // List of classes to support for query subscriptions
  },
  maxUploadSize: "500mb" // you will now have a 500mb limit :)
});
That said, you can also specify a maxUploadSize in your instantiation, as you can see in the last line.
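For the validation itself, the adapter's createFile receives the raw data before anything is stored, so size and type checks can happen there. A hypothetical sketch of ./file_adapter wrapping whatever adapter you already use (the four methods below are the interface parse-server's FilesAdapter expects):

// file_adapter.js
class ValidatingAdapter {
  constructor(innerAdapter, options = {}) {
    this.inner = innerAdapter; // e.g. the GridFS or S3 adapter you already use
    this.maxBytes = options.maxBytes || 5 * 1024 * 1024;
    this.allowedTypes = options.allowedTypes || ['image/png', 'image/jpeg'];
  }
  createFile(filename, data, contentType) {
    // data arrives as a Buffer, so it can be inspected before storage
    if (data.length > this.maxBytes) {
      return Promise.reject(new Error('File too large'));
    }
    if (!this.allowedTypes.includes(contentType)) {
      return Promise.reject(new Error('Unsupported file type'));
    }
    return this.inner.createFile(filename, data, contentType);
  }
  deleteFile(filename) { return this.inner.deleteFile(filename); }
  getFileData(filename) { return this.inner.getFileData(filename); }
  getFileLocation(config, filename) { return this.inner.getFileLocation(config, filename); }
}
module.exports = ValidatingAdapter;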
You have to use saveInBackground:
val file = ParseFile("filename", data) // data: ByteArray with the file contents
file.saveInBackground({ e ->
    if (e == null) {
        // saved successfully
    } else {
        Toast.makeText(applicationContext, "Error: $e", Toast.LENGTH_SHORT).show()
        e.printStackTrace()
        Log.d("DEBUG", "file " + e.code)
    }
}, { percentDone ->
    Log.d("DEBUG", "file: $percentDone")
})
I'm trying to use the File and Directory Entries API to create a file uploader tool that will allow me to drop an arbitrary combination of files and directories into a browser window, to be read and uploaded.
(I'm fully aware that similar functionality can be achieved by using a file input element with webkitdirectory enabled, but I'm testing a use case where the user isn't forced to put everything into a single folder.)
Using the Drag and Drop API, I've managed to read the DataTransfer items and convert them to FileSystemEntry objects using DataTransferItem.webkitGetAsEntry.
From there, I am able to tell if the entry is a FileSystemFileEntry or a FileSystemDirectoryEntry. My plan, of course, is to recursively walk the directory structure, if any, which I should be able to do using the FileSystemDirectoryReader method readEntries, like this:
handleDrop(event) {
  event.preventDefault();
  event.stopPropagation();
  // assuming I dropped only one directory
  const directory = event.dataTransfer.items[0];
  const directoryEntry = directory.webkitGetAsEntry();
  const directoryReader = directoryEntry.createReader();
  directoryReader.readEntries(function(entries){
    // callback: the "entries" param is an Array
    // containing the directory entries
  });
}
However, I'm running into the following issue: in Chrome, the readEntries method only returns 100 entries. Apparently this is the expected behavior, as the way to obtain subsequent files from the directory is to call readEntries again. However, I'm finding this impossible to do. A subsequent call to the method throws the error:
DOMException: An operation that depends on state cached in an interface object was made but the state had changed since it was read from disk.
Does anyone know a way around this? Is this API hopelessly broken for directories of 100+ files in Chrome? Is this API deprecated? (not that it was ever "precated"). In Firefox, readEntries returns the whole directory content at once, which is apparently against the spec, but it is usable.
Please advise.
Of course, as soon as I had posted this question the answer hit me. What I was trying to do was akin to the following:
handleDrop(event) {
  event.preventDefault();
  event.stopPropagation();
  // assuming I dropped only one directory
  const directory = event.dataTransfer.items[0];
  const directoryEntry = directory.webkitGetAsEntry();
  const directoryReader = directoryEntry.createReader();
  directoryReader.readEntries(function(entries){
    // callback: the "entries" param is an Array
    // containing the directory entries
  });
  directoryReader.readEntries(function(entries){
    // call readEntries a second time
  });
}
The problem with this is that readEntries is asynchronous, so I'm calling it again while it's still "busy" reading the first batch (I'm sure lower-level programmers have a better term for that). A better way of achieving what I was trying to do:
handleDrop(event) {
  event.preventDefault();
  event.stopPropagation();
  // assuming I dropped only one directory
  const directory = event.dataTransfer.items[0];
  const directoryEntry = directory.webkitGetAsEntry();
  const directoryReader = directoryEntry.createReader();
  function read(){
    directoryReader.readEntries(function(entries){
      if (entries.length > 0) {
        // do something with the entries
        read(); // read the next batch
      } else {
        // do whatever needs to be done after
        // all files are read
      }
    });
  }
  read();
}
This way we ensure the FileSystemDirectoryReader is done with one batch before starting the next one.
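If you prefer async/await, the same batching logic can be wrapped in a promise. A minimal sketch (readAllEntries is a hypothetical helper name):

async function readAllEntries(directoryEntry) {
  const reader = directoryEntry.createReader();
  // readEntries takes success and error callbacks, so promisify one batch at a time
  const readBatch = () => new Promise((resolve, reject) => reader.readEntries(resolve, reject));
  const all = [];
  let batch;
  // an empty batch signals that the directory has been fully read
  while ((batch = await readBatch()).length > 0) {
    all.push(...batch);
  }
  return all;
}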
I am trying to download a file, but it keeps getting interrupted, and I have no idea why. I cannot find any information on how to debug the reason it was interrupted either.
Here is where I am saving the file:
C:\Users\rnaddy\AppData\Roaming\Tachyon\games\murware\super-chain-reaction\web.zip
window.webContents.session.on('will-download', (event, item, webContents) => {
  let path = url.parse(item.getURL()).pathname;
  let dev = path.split('/')[3] || null;
  let game = path.split('/')[4] || null;
  if (!dev && !game) {
    item.cancel();
  } else {
    item.setSavePath(Settings.fileDownloadLocation(dev, game, 'web'));
    item.on('updated', (event, state) => {
      let progress = 0;
      if (state == 'interrupted') {
        console.log('Download is interrupted but can be resumed');
      } else if (state == 'progressing') {
        progress = item.getReceivedBytes() / item.getTotalBytes();
        if (item.isPaused()) {
          console.log('Download is paused');
        } else {
          console.log(`Received bytes: ${item.getReceivedBytes()}; Progress: ${progress.toFixed(2)}%`);
        }
      }
    });
  }
});
Here is my listener that will trigger the above:
ipcMain.on(name, (evt) => {
  window.webContents.downloadURL('http://api.gamesmart.com/v2/download/murware/super-chain-reaction');
});
Here is the output that I am getting in my console:
Received bytes: 0; Progress: 0.00%
Received bytes: 233183; Progress: 0.02%
Download is interrupted but can be resumed
I have a hosts file entry set up:
127.0.0.1 api.gamesmart.com
When I try to access the path http://api.gamesmart.com/v2/download/murware/super-chain-reaction in Chrome, the file downloads just fine into my Downloads folder. So, what is causing this?
If you set a specific directory for the download, you should pass the full file path, including the file name, to the item.setSavePath() method. The best way to do that is to fetch the file name from the DownloadItem object (item in your case) itself: item.getFilename() easily gets the name of the current download item (see the docs).
There is also a convenient way to get frequently used public system directory paths in Electron: the app.getPath(name) method, where name is one of Electron's pre-defined strings for several directories (see the docs).
So your complete save path would be: app.getPath("downloads") + "/" + item.getFilename()
In your case, if you are OK with your file path extraction method, the only thing you are missing is the file name at the end of the download path.
Of course, you can use any other string as the file name if you wish, but remember to put the correct extension on it. :)
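Putting that together, a minimal sketch of the handler with the file name appended (using Electron's default downloads directory; substitute your Settings.fileDownloadLocation directory as needed):

const { app } = require('electron');

window.webContents.session.on('will-download', (event, item) => {
  // full save path = target directory + the file name reported by the server
  item.setSavePath(app.getPath('downloads') + '/' + item.getFilename());
});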
My solution was to use the correct Windows path separator (\), e.g. 'directory\\file.zip'. Generally, Node.js accepts / on any platform, but this API seems to be sensitive about the path separator.
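A way to avoid hardcoding the separator is path.join, which emits the platform's own separator:

const path = require('path');

// yields 'directory\file.zip' on Windows and 'directory/file.zip' elsewhere;
// pass the result to item.setSavePath()
const savePath = path.join('directory', 'file.zip');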