I've been trying to set up a system where a user inputs data for a proposal, the data is saved to the database, and then, when needed, the system automatically places the data into tagged fields in a pre-made document. I have everything done as far as data entry is concerned; I just haven't been able to find any info on how to auto-populate the tagged fields in the document itself.
This might not be exactly what you are looking for, but it sounds like you want to use MS Word fields.
http://office.microsoft.com/en-ca/word-help/field-codes-in-word-HA010100426.aspx
Edit:
Sorry, I misread the question. One option is to do a mail merge. In Word 2010, select the Mailings tab, then the Start Mail Merge pull-down, and follow the wizard. It's a super easy way to pull DB info into individual documents, and you can easily select which records you want printed/created.
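If you want to fill the tagged fields from code instead of through the wizard, one option is the third-party docx-mailmerge Python package, which fills Word MERGEFIELD placeholders from a dictionary. A minimal sketch follows; the template path and field names are invented for illustration:

from mailmerge import MailMerge  # pip install docx-mailmerge

# Open a .docx template that contains MERGEFIELD placeholders.
document = MailMerge('proposal_template.docx')
print(document.get_merge_fields())  # lists the tagged fields the template defines

# Fill the fields from a database row (hypothetical values).
document.merge(
    client_name='Acme Corp',
    proposal_date='2016-01-01',
    total_price='10,000',
)
document.write('proposal_filled.docx')

This keeps the template as a normal Word document; the only contract you maintain is the set of MERGEFIELD names.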
My plan is to set up a foreign-language-learning note-taking website using the Python Django framework.
The website mainly features the following:
user can type in paragraphs in two different languages and match them later on (I can solve this by myself, so you don't need to answer this)
the website detects when the user highlights some words/sub-sentences (I can imagine how to solve this by myself, so you don't need to answer this)
user can add comments on what they've highlighted; however, I don't have a concrete plan for a best practice of saving and re-displaying what users have done with their added notes/comments. To be exact: when a user later opens his/her saved notes, how do I design my database schema so that when he/she hovers over a vocabulary word/phrase, the website automatically shows the note that was saved before? (For example, do I need to remember the position of each word?)
The answers I need could be in the form of code or detailed conceptual explanations.
One resource-friendly approach is to save the word number(s) against each saved note.
For example, take a paragraph like:
We love stackoverflow. A wonderful place for community to interact.
And suppose you want to save the note "I should replace wonderful with some other word" for "wonderful place". Then your schema might look like the sketch below.
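For instance, a rough sketch of the idea as Django models; the model and field names here are my own invention, so adapt them to your setup:

from django.db import models

class Paragraph(models.Model):
    text = models.TextField()

class Note(models.Model):
    # 1-based word positions within the paragraph; for the example above,
    # "wonderful place" would be start_word=5, end_word=6.
    paragraph = models.ForeignKey(Paragraph, on_delete=models.CASCADE)
    start_word = models.PositiveIntegerField()
    end_word = models.PositiveIntegerField()
    comment = models.TextField()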
That way you can fetch only the notes for the word range currently being displayed, and you can support multiple comments per span.
The only thing you have to manage is edits: if somebody comes and edits the text, how do you want to handle it? For example, when somebody deletes a word, do you want to retain the note or update the note with whatever new word takes its place?
I'm looking for suggestions on how to handle the following use case with the Python Django framework; I'm also open to using JavaScript libraries/AJAX.
I'm working with a pre-existing table/model called revenue_code with over 600 million rows of data.
The user will need to search three fields within one search (code, description, room) and be able to select multiple search results, similar to Kendo's multi-select control. I first started off by combining the codes with django-filter as shown below, but my application became unresponsive; after waiting 10-15 minutes I was able to view the search results, but I couldn't select anything.
https://simpleisbetterthancomplex.com/tutorial/2016/11/28/how-to-filter-querysets-dynamically.html
I've also tried Kendo controls, Select2, and Chosen, because I need the user to be able to select as many rev codes as they need (upward of 10-20), but all gave the same unresponsive page when the data was loaded into the control/multi-select.
Essentially what I'm looking for is something like the link below, which allows the user to make multiple selections and handles a massive amount of data without becoming unresponsive. Ideally I'd like to be able to run my search without displaying all the data first.
https://petercuret.com/add-ajax-to-django-without-writing-javascript/
Is the Django framework meant to handle this type of volume? Would it be better to export this data into a file and read the file? I'm not looking for code, just some pointers on how to handle this use case.
What is the basic mechanism of "searching 600 million rows"? Basically, what a database does is build an index before search time, general enough to serve different types of query; at search time you search the index instead, which is much smaller (so it can fit in memory) and faster. But no matter what, "searching" by its nature has no "pagination" concept, and if 600 million records cannot fit into memory at the same time, parts of them must be repeatedly swapped in and out; the more parts, the slower the operation. All of this is hidden behind the algorithms in databases like MySQL.
There are very compact representations, like the bitmap index, which let you search data such as male/female very fast, or any data where one bit per piece of information suffices.
So whether it's Django or not does not really matter. What matters is the tuning of the database, the design of the tables to facilitate the queries (the types of indices), and the total amount of memory at the server end to keep the data in memory.
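As a hedged illustration on the Django side, assuming a model over the existing revenue_code table (the field types here are guesses based on the question), you can declare the indices up front so each searched column is answered from an index rather than a table scan:

from django.db import models

class RevenueCode(models.Model):
    code = models.CharField(max_length=20)
    description = models.CharField(max_length=255)
    room = models.CharField(max_length=50)

    class Meta:
        db_table = 'revenue_code'
        indexes = [
            # One index per searched column, built before search time.
            models.Index(fields=['code']),
            models.Index(fields=['description']),
            models.Index(fields=['room']),
        ]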
Check this out:
https://dba.stackexchange.com/questions/20335/can-mysql-reasonably-perform-queries-on-billions-of-rows
https://serverfault.com/questions/168247/mysql-working-with-192-trillion-records-yes-192-trillion
How many rows are 'too many' for a MySQL table?
You can't load all the data into your page at once. 600 million records is too many.
Since you mentioned select2, have a look at their example with pagination.
The trick is to limit your SQL results to maybe 100 or so at a time. When the user scrolls to the bottom of the list, it can automatically load in more.
Send the search query to the server, and do the filtering in SQL (or NoSQL or whatever you use). Database engines are built for that. Don't try filtering/sorting in JS with that many records.
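Putting that together, here is a minimal sketch of such an endpoint in Django, assuming the revenue_code model from the question; the request/response shape follows select2's pagination example, and the view name and parameter handling are illustrative:

from django.core.paginator import Paginator
from django.db.models import Q
from django.http import JsonResponse

from .models import RevenueCode  # hypothetical model over revenue_code

def revenue_code_search(request):
    term = request.GET.get('term', '')
    page = int(request.GET.get('page', 1))

    # Filter in the database across all three fields; never send
    # the whole table to the browser.
    qs = RevenueCode.objects.filter(
        Q(code__icontains=term) |
        Q(description__icontains=term) |
        Q(room__icontains=term)
    ).order_by('code')

    paginator = Paginator(qs, 100)  # 100 results per scroll "page"
    current = paginator.get_page(page)

    return JsonResponse({
        'results': [{'id': rc.pk, 'text': f'{rc.code} - {rc.description}'}
                    for rc in current],
        'pagination': {'more': current.has_next()},
    })

Note that icontains on a table this size still needs index support (e.g. prefix matching with istartswith, or a trigram index) to stay fast.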
I have a question about saving data to the database using AJAX (I can do this part). My problem is that I have a very large nested form which I can't change: large in terms of input fields that need to be saved, approximately 100 fields, and new fields may open depending on the user's selected options.
For example, suppose one question is "which game do you play?". If the user selects one game in a multi-select drop-down, then the next questions would be how frequently they play this game, on which day, at which time, and many more. Each game may have a different set of questions.
Now my problem is how to save this data in the database. Should I save it after the user clicks submit, or should I save it while the user is filling in the data, so that if they refresh the page the form still has their data?
How frequently should I send AJAX requests to save the data, how do I get the data from the newly filled fields, and how should I save it all in Rails?
I know about update_attributes.
Please help me or give some suggestions on how I should do this.
Whether you live-edit or only save on submit is mostly a user-experience question.
But if you are saving frequently (like an autosave) and are worried about the size (100 normal columns might be OK anyway, although large text or blob columns less so), then what you want to do, fairly simply, is save only the fields that have actually changed.
There are many ways to implement this in JavaScript. You might save each input when the user finishes editing it (e.g. when the input loses focus), or you might save on a timer and track the fields changed since the last save.
Then have your JavaScript include only those fields in its AJAX request (PATCH might be a good method to use). Rails will then only try to save the attributes on the object that you changed (via update_attributes, or save on the ActiveRecord object). If you also want to optimise away the SELECT, use update or update_all on the class, e.g. it ends up like:
MyBigRecord.update(id, title: "My new title")
You can easily use normal strong parameters here, which will include only the attributes actually present in params.
MyBigRecord.update(id, params.require(:my_big_record).permit(:title, :author, :etc))
If you need to deal with sub-objects then you may need some special handling, but the idea is the same. A little bit of logic can also do the initial create on demand, although your JavaScript must then receive the id to use for future saves.
I am very new to MVC and need help with how I should go about creating an application with the following features.
What I want to do
I have a form that asks users some questions, and for each question users must be able to add as many items as they want as their answers.
For example, say the application is a recipe book management (Not my actual project), and a user enters the following information:
Recipe Name (e.g. "Orange Lemonade")
Description (e.g. "The best lemonade...")
Fruit Ingredients
Select fruit from an existing list or type fruit (e.g. lemon)
Enter quantity (e.g. 3)
NOTE: There are many other ingredient lists, such as a Vegetable Ingredients list, etc., and they have to be entered the same way as the Fruit Ingredients.
The picture below simply describes the idea...
Here's the code I used: JSFIDDLE (I referred to this site)
My Question
How should I go about storing lists of ingredients like Fruit and Vegetable? The actual application would have many more lists in one form.
Is there a way to call my repository methods to add/delete a list item every time an item is added/deleted?
Or should I save each list to an array and store them when the form's "Save" button is clicked?
Edit
My question probably wasn't clear, but I would like to know if there is a standard or recommended way to achieve this. If there's none and any method is fine, then I would like to call the repository every time the user makes a change to the list. (Users must be able to save the recipe and come back to edit it until they submit the final version.) And I'd like to know how to call the method that saves list items to the DB from the client side in MVC (not using web controls like ScriptManager).
This question is hard to answer because it depends on your application. Which way you implement it is completely based on how your application will be used and the expectations of both you and the user.
Consider the following questions:
Do they have to finish the entire recipe before saving it and having it persisted?
Are you willing to live with incomplete recipes?
Do you expect the task to take days to complete, and thus will you allow the user to save and continue later?
Is there a draft / in-progress / complete / published type of status that the recipe will progress through?
Based on the answers to these questions I think you will be able to answer your own question. The answers will elucidate your requirements and which way to go as far as architecture.
I'm working in nodejs, hosted at Heroku (free plan so far).
I get the data from elsewhere automatically (this part works fine and I get JSON or CSV), and my goal is to add it to a PostgreSQL DB.
While I'm new to DB management and PostgreSQL, I've done my research before posting this. I'm aware that the COPY command exists, and I know how to INSERT multiple rows without duplicates. But my problem is a mix of both (plus another difficulty).
I hope my question is not breaking the rules.
Short version, I need to:
Add lots of data at once
Never create duplicates
Rename columns between the source data and my table
Long version with details :
The data I collect comes from multiple sources (2 for now, but this will grow) and is quite big (>1000).
I also need to remap the column names to one unified system. What is called "firstDay" in one source is called "dateBegin" in another, and I want both to be called "startDate" in my table.
If I use INSERT, I take care of this myself (in JS) while constructing the query, but maybe COPY could do it in a better way. Also, INSERT seems to have a limit on how much data you can push at one time, so I would need to split my query into multiple parts and maybe use callbacks or promises to avoid drowning the DB.
And finally, I will update this DB regularly and automatically, and there will be a lot of duplicates. Fortunately, every piece of data has a unique id, and I have made the column that stores this id the table's PRIMARY KEY. I thought that might eliminate any problem with duplicates, but I may be wrong.
My first version was very ugly (a for loop issuing a new query on every iteration) and didn't work. I was thinking about doing 1000 rows at a time, recursively, waiting for a callback before sending another batch. It seems clunky and time-expensive to do it that way. COPY would be perfect if I could select/rename/remap columns and avoid duplicates, but I've read the documentation and I don't see a way to do that.
Thank you very much, any help is welcome. I'm still learning so please be kind.
I have done this before by using temporary tables to "stage" the data and then doing an INSERT ... SELECT to move the data from the staging table to your production table.
For populating your staging table you can use bulk INSERTs or COPY.
For example,
BEGIN;

-- Your staging table mirrors the *source* column names.
CREATE TEMPORARY TABLE staging_my_table (
    blah integer,
    bloo integer,
    firstDay integer -- placeholder columns/types; use your real ones
);

-- Now that you have your staging table you can bulk INSERT or COPY
-- into it from your code, e.g.,
INSERT INTO staging_my_table (blah, bloo, firstDay)
VALUES (1, 2, 3), (4, 5, 6); -- etc.

-- Now you can do an INSERT into your live table from staging, renaming
-- firstDay to startDate on the way and skipping duplicates, e.g.,
INSERT INTO my_table (blah, bloo, startDate)
SELECT blah, bloo, firstDay
FROM staging_my_table staging
WHERE NOT EXISTS (
    SELECT 1
    FROM my_table
    WHERE staging.bloo = my_table.bloo -- match on your unique id column
);

COMMIT;
There are always exceptions, but this might just work for you.
Have a good one