Fine-Uploader Replace File but keep UUID - javascript

Does Fine-Uploader have the concept of replacing an existing file while keeping the same UUID?
My uploaders are restricted to one file only. Clicking upload again (after a file has already been successfully uploaded) results in a new UUID being created for the new file. I'd like to keep the same UUID, since it may already be cross-linked to other data points in our back end.

Reusing a UUID for multiple files defeats the entire purpose of a UUID, so this is not supported.

I had the exact same use case and did not find a direct solution. The easiest solution for me was to handle the onSubmit event and change the UUID to a value that you create in your back end.
// Generate one stable UUID up front; qq.getUniqueId() is a Fine Uploader utility
var sameUuid = qq.getUniqueId();

var uploader = new qq.FineUploader({
    callbacks: {
        onSubmit: function (id, fileName) {
            // Replace the UUID Fine Uploader assigned to this submission
            this.setUuid(id, sameUuid);
        }
    }
});
You could also generate a second UUID that does not change and send it along as a parameter or as a custom header.
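A minimal sketch of that second approach, using Fine Uploader's request.params option (the parameter name stableUuid is just an illustration, not part of Fine Uploader's API):
var stableUuid = qq.getUniqueId();

var uploader = new qq.FineUploader({
    request: {
        // Extra POST parameters sent with every upload request;
        // the back end can treat stableUuid as the permanent identifier.
        params: {
            stableUuid: stableUuid
        }
    }
});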


React, Redux, FilePond

My question... how do I upload files using FilePond on the click of a button, not automatically as it does out of the box? I also need to take a number of other actions with these images (like displaying them for review prior to upload), add other data to the FormData that gets sent, and dispatch actions to Redux.
Normally I would create a FormData object, append the files and other values to it, and POST it to some endpoint (with any custom headers needed). However, when I inspected the FilePond instance, it seems the only thing I have access to is a blob, not the actual files. Is this accurate? Do I need to follow some special FilePond-specific technique to get file upload to work?
FilePond's docs have a custom config value called "server" that appears to have access to an actual file in the more advanced examples, so is this the way it must be done? Can't I just grab the files (from somewhere I don't currently see on the FilePond instance) and append them to an object for use in my normal "service"?
Any tips are appreciated. In a React app, I want to upload a variable number of files on click, after appending other form data and setting headers (using Axios, ideally), and POST these files to an API.
The example from their docs uses a prop like:
server="/api"
I want something like (fake code):
server={submitImages} // this should only happen on click of some button
where:
submitImages = (fieldName, file) => {
    const formData = new FormData()
    formData.append(fieldName, file, file.name)
    formData.append('foo', this.props.foo)
    // uploadDocs is a service that lives elsewhere and actually does the POST
    const docuploadresult = this.props.uploadDocs(formData)
    docuploadresult.then(result => {
        // success
    }, error => {
        // error
    })
}
My problems are that I don't see why this needs to happen in some special config object like server, I don't see how to make it happen on click, and I don't see an actual file anywhere.
I may be overthinking this?
FilePond offers the server property so it can handle the uploads for you, but this is not required. You can use getFiles to request all file items (and their File objects) from FilePond and upload them yourself.
Add your own submit button to the form and use submitImages below to submit the files.
submitImages = (fieldName) => {
    const formData = new FormData();

    this.filepondRef.getFiles()
        .map(fileItem => fileItem.file)
        .forEach(file => {
            formData.append(fieldName, file, file.name);
        });

    // upload here
}
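Wiring that up might look roughly like this (a sketch, not a drop-in solution: the 'images' field name is arbitrary, uploadDocs is the Axios-backed service from the question, and it assumes react-filepond proxies FilePond instance methods such as getFiles through the component ref):
import * as React from 'react';
import { FilePond } from 'react-filepond';

class Uploader extends React.Component {
    handleSubmit = () => {
        const formData = new FormData();

        // Pull the native File objects out of FilePond's file items
        this.filepondRef.getFiles()
            .map(fileItem => fileItem.file)
            .forEach(file => formData.append('images', file, file.name));

        formData.append('foo', this.props.foo);

        // uploadDocs is the question's own service that does the POST
        this.props.uploadDocs(formData).then(
            result => { /* success */ },
            error => { /* error */ }
        );
    };

    render() {
        return (
            <div>
                <FilePond
                    ref={ref => (this.filepondRef = ref)}
                    allowMultiple={true}
                    instantUpload={false} // nothing uploads until the button is clicked
                />
                <button onClick={this.handleSubmit}>Upload</button>
            </div>
        );
    }
}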
If you want to show image previews, you can add the image preview plugin.
https://pqina.nl/filepond/docs/patterns/plugins/image-preview/

Why is the foreach loop NOT making a change to the file?

I am reviewing a Node.js program someone wrote to merge objects from two files and write the data to a MongoDB database. I am struggling to wrap my head around how this works, although I ran it and it works perfectly.
It lives here: https://github.com/muhammad-asad-26/Introduction-to-NodeJS-Module3-Lab
To start, there are two JSON files, each containing an array of 1,000 objects that were 'split apart' and are really meant to be combined records. The goal is to merge the 1st object of both files together, then both 2nd objects, and so on through both 1,000th objects in each file, and insert the results into a db.
Here are the excerpts that give you context:
const customerData = require('./data/m3-customer-data.json')
const customerAddresses = require('./data/m3-customer-address-data.json')

mongodb.MongoClient.connect(url, (error, client) => {
    customerData.forEach((element, index) => {
        element = Object.assign(element, customerAddresses[index])
        // I removed some logic which decides how many records to push to the DB at once
        var tasks = [] // this array of functions is for use with async, not relevant
        tasks.push((callback) => {
            db.collection('customers').insertMany(customerData.slice(index, recordsToCopy), (error, results) => {
                callback(error)
            });
        });
    })
})
As far as I can tell,
element = Object.assign(element, customerAddresses[index])
is modifying the current element during each iteration, i.e. the JSON object in the source file.
to back this up,
db.collection('customers').insertMany(customerData.slice(index, recordsToCopy)
further seems to confirm that, when writing the completed merged data to the database, the author is reading out of that original customerData file, which makes sense only if the completed merged data is living there.
Since the source files are unchanged, the two things that are confusing me are, in order of importance:
1) Where does the merged data live before being written to the db? The customerData file is unchanged at the end of runtime.
2) What's it called when you access a JSON file using array syntax? I had no idea you could read files without the fs module or similar; the author reads files using only require('filename'). I would like to read more about that.
Thank you for your help!
Question 1:
The merged data lives in the customerData variable before it's sent to the database. It exists only in memory at the time insertMany is called, and is passed in as a parameter. There is no reason for anything on the file system to be overwritten; in fact, it would be inefficient to rewrite that .json file every time you wrote to the database. Storing that information is the job of the database, not of a file within your application.
If you did want to overwrite the file, it would be easy enough: just add something like fs.writeFile('./data/m3-customer-data.json', JSON.stringify(customerData), 'utf8', () => console.log('overwritten')); after the insertMany, and be sure to include const fs = require('fs');. To make it clearer what is happening, try writing the value of customerData.length to the file instead.
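To see why nothing on disk changes, here is a minimal standalone sketch of the same pattern (hypothetical data, not the lab's files):
// Object.assign mutates its first argument (the target) in place,
// so the objects inside customerData are updated in memory only.
const customerData = [{ id: 1 }, { id: 2 }];
const customerAddresses = [{ city: 'Oslo' }, { city: 'Bergen' }];

customerData.forEach((element, index) => {
    Object.assign(element, customerAddresses[index]);
});

console.log(customerData);
// [ { id: 1, city: 'Oslo' }, { id: 2, city: 'Bergen' } ]
// The source .json files on disk are never touched.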
Question 2:
Look at the docs on require() in Node. All it's doing is parsing the data in the JSON file.
There's no magic here. A static JSON file is parsed to an array using require and stored in memory as the customerData variable. Its values are manipulated and sent to another computer elsewhere, where they can be stored. As the code was originally written, the only purpose that JSON file serves is to be read.
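A quick illustration of that behaviour (data.json is a hypothetical file sitting next to the script):
// data.json contains: [{ "id": 1 }, { "id": 2 }]
const records = require('./data.json'); // parsed synchronously at load time
console.log(Array.isArray(records)); // true
console.log(records[0].id);          // 1 -- plain array access, no fs calls needed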

How can I save a drag and dropped image to the server?

First of all, background.
Everything is behind a corporate firewall, so there is no showing a live version or accessing nifty tools like Node.js or many libraries.
I have HTML, PHP 5, JavaScript, and jQuery 1.9.
I have a web window with a bunch of data. I want to allow users to drop a picture called a sleuth image into a box, show that box, and save that image to a special location on my server. The sleuth image is a dynamically generated graph on a different internal server that I have no privileges whatsoever on (another department). While it could be named anything, I want to save it with a specific name so it displays properly later when the web window for this data is reloaded.
I have this JavaScript function which allows them to drop an image and displays it. I just need it to save the image as a .png.
function drop_handler(event) {
    // this is part of the ability to drag sleuth images onto the OOC
    // Prevent default behavior (prevent the file from being opened)
    event.preventDefault();

    var items = event.dataTransfer.items;
    for (var i = 0; i < items.length; i++) {
        var item = items[i];
        if (item.kind === 'string') {
            item.getAsString(function(data) {
                document.getElementById("sleuth").innerHTML = data;
            });
        }
    }
}
I need to save the img src that shows up in the variable "data" as "sleuthImg.png".
Yes, I know I need to add validation. First, I need this part to work.
First, you will need an endpoint on the server that can accept files and store them. Assuming you have that part already, then:
Get the file from the dataTransfer object
https://developer.mozilla.org/en-US/docs/Web/API/DataTransfer/files
Then create a new FormData
https://developer.mozilla.org/en-US/docs/Web/API/FormData
var formData = new FormData();
formData.append('file', fileFromDataTransfer);
formData.append('mime_type', fileFromDataTransfer.type);
(where 'file' is the name of the POST parameter that your server is expecting; 'mime_type' is another form-data parameter, included as an example)
Then, using the request-making library of your choosing, make a POST request with the form data.
post('your/upload/endpoint', formData);
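Since jQuery 1.9 is available here, the request could look roughly like this (a sketch meant to live inside the drop handler, where event.dataTransfer is available; upload.php and the 'file' field name are assumptions, and the two false options are required so jQuery doesn't try to serialize the FormData):
var file = event.dataTransfer.files[0]; // first dropped file, if any

var formData = new FormData();
formData.append('file', file, 'sleuthImg.png'); // force the desired file name
formData.append('mime_type', file.type);

$.ajax({
    url: 'upload.php',  // hypothetical endpoint that writes the file to disk
    type: 'POST',
    data: formData,
    processData: false, // don't let jQuery turn the FormData into a query string
    contentType: false, // let the browser set the multipart boundary
    success: function () { /* saved */ },
    error: function () { /* handle failure */ }
});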

Reading a file and storing it in an array in javascript

First of all, I'm programming in JavaScript, but not for a website or anything like that: just a .js file in a folder on my PC (I later pass that .js file to other people so they can use it).
Now, I want to read a txt file in the same folder as the script and store its content in a variable. I'd like to do something like this: read the file and store it in an array, then split up the file everywhere there is a }.
Then, if a string (input by the user; I already have this covered) contains a substring from the array, it would call a function.
Can you please help me?
As we answered the first part of your question in the comments, here is my solution to the second part.
You can add an event listener on the input and check the user input against the values in your array. I may have misunderstood what exactly you mean by "substring".
var myData = ["world", "one", "two", "blue"];

document.getElementById('theInput').addEventListener('input', checkInput);

function checkInput() {
    var input = this.value;
    if (myData.indexOf(input) > -1) {
        console.log("match!")
        // call your function
    }
}
<input id='theInput' type='text'/>
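If "substring" means a partial match rather than an exact one, a variant like this may be closer to what you want (ES5-safe, using indexOf on the string itself):
function checkInput() {
    var input = this.value;
    // true if the input contains any entry of myData as a substring
    var matched = myData.some(function (word) {
        return input.indexOf(word) > -1;
    });
    if (matched) {
        console.log("match!");
        // call your function
    }
}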
If it does not need to run in the browser, you can use Node and its fs module to read/write files:
Node
Node fs (File System)
If it does need to run in the browser, you can use XMLHttpRequest and Ajax, or use an <input type='file'> with FileReader.
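For the Node route, a minimal sketch (data.txt is a hypothetical file sitting next to the script):
var fs = require('fs');

// Read the whole file synchronously as UTF-8 text
var contents = fs.readFileSync('./data.txt', 'utf8');

// Split the content everywhere there is a "}" to get the array
var myData = contents.split('}');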

Mirth channelMap in source JavaScript

In my source connector, I'm using JavaScript for my database work due to my requirements and parameters.
The end result is storing the data:
ifxResults = ifxConn.executeCachedQuery(ifxQuery); //var is declared
I need to use these results in the destination transformer.
I have tried channelMap.put("results", ifxResults); and I get the following error: ReferenceError: "channelMap" is not defined.
I have also tried return ifxResults, but I'm not sure how to access the result in the destination transformer.
Do you want to send each row as a separate message through your channel? If so, sounds like you want to use the Database Reader in JavaScript mode. Just return that ResultSet (it's really a CachedRowSet if you use executeCachedQuery like that) and the channel will handle the rest, dispatching an XML representation of each row as discrete messages.
If you want to send all rows in the result set aggregated into a single message, that will be possible with the Database Reader very soon: MIRTH-2337
Mirth Connect 3.5 will be released next week so you can take advantage of it then. But if you can't wait or don't want to upgrade then you can still do this with a JavaScript Reader:
var processor = new org.apache.commons.dbutils.BasicRowProcessor();
var results = new com.mirth.connect.donkey.util.DonkeyElement('<results/>');

while (ifxResults.next()) {
    var result = results.addChildElement('result');
    for (var entries = processor.toMap(ifxResults).entrySet().iterator(); entries.hasNext();) {
        var entry = entries.next();
        result.addChildElement(entry.getKey(), java.lang.String.valueOf(entry.getValue()));
    }
}

return results.toXml();
I know this question is kind of old, but here's an answer just for the record.
For this answer, I'm assuming that you are using a Source connector type of JavaScript Reader, and that you're trying to use channelMap in the JavaScript Reader Settings editing pane.
The problem is that the channelMap variable isn't available in this part of the channel. It's only available in filters and transformers.
It's possible that what you want can be accomplished by using the globalChannelMap variable, e.g.
globalChannelMap.put("results", ifxResults);
I usually need to do this when I'm processing one record at a time and need to pass some setting to the destination channel. If you do it like I've done in the past, then you would first create a globalChannelMap key/value in the source channel's transformer:
globalChannelMap.put("ProcID", "TestValue");
Then go to the Destinations tab and select your destination channel to make sure you're sending it to the destination (I've never tried this for channels with multiple destinations, so I'm not sure if anything different needs to be done).
(Screenshot: the Destinations tab of the source channel.)
Notice that ProcID is now listed in the Destination Mappings box. Click the New button next to the Map Variable box and you'll see Variable 1 appear. Double click on that and put in your mapping key, which in this case is ProcID.
Now go to your destination channel's source transformer. There you would enter the following code:
var SentValue = sourceMap.get("ProcID");
Now SentValue in your destination transformer has whatever was in ProcID when your source channel relinquished control.
