Javascript function taking too long to complete?

Below is a snippet of code that I am having trouble with. The purpose is to check for duplicate entries in the database and return a boolean through "h": true if a duplicate exists, false otherwise. For testing purposes I am returning true for "h", but by the time the alert(duplicate_count); line gets executed, duplicate_count is still 0, even though the "count +1" alert does get executed.
To me it seems like the function updateUserFields is taking longer to execute, so it hasn't finished by the time execution reaches the alert.
Any ideas or suggestions? Thanks!
var duplicate_count = 0;
for (var i = 0; i < skill_id.length; i++) {
    function updateUserFields(h) {
        if (h) {
            duplicate_count++;
            alert("count +1");
        } else {
            alert("none found");
        }
    }
    var g = new cfc_mentoring_find_mentor();
    g.setCallbackHandler(updateUserFields);
    g.is_relationship_duplicate(resource_id, mentee_id, section_id[i], skill_id[i], active_ind, table);
};
alert(duplicate_count);

There is no reason whatsoever to use client-side JavaScript/jQuery to remove duplicates from your database. Security concerns aside (and there are a lot of those), there is a much easier way to make sure the entries in your database are unique: use SQL.
SQL is capable of expressing the requirement that there be no duplicates in a table column, and the database engine will enforce that for you, never letting you insert a duplicate entry in the first place. The syntax varies very slightly by database engine, but whenever you create the table you can specify that a column must be unique.
Let's use SQLite as our example database engine. The relevant part of your schema is probably expressed right now with tables something like this:
CREATE TABLE Person(
    id INTEGER PRIMARY KEY ASC,
    -- Other fields here
);
CREATE TABLE MentorRelationship(
    id INTEGER PRIMARY KEY ASC,
    mentorID INTEGER,
    menteeID INTEGER,
    FOREIGN KEY (mentorID) REFERENCES Person(id),
    FOREIGN KEY (menteeID) REFERENCES Person(id)
);
However, you can enforce uniqueness, i.e. require that any (mentorID, menteeID) pair is unique, by making the pair (mentorID, menteeID) the primary key. This works because you are only allowed one copy of each primary key. The MentorRelationship table then becomes
CREATE TABLE MentorRelationship(
    mentorID INTEGER,
    menteeID INTEGER,
    PRIMARY KEY (mentorID, menteeID),
    FOREIGN KEY (mentorID) REFERENCES Person(id),
    FOREIGN KEY (menteeID) REFERENCES Person(id)
);
EDIT: As per the comment, alerting the user to duplicates but not actually removing them
This is still much better with SQL than with JavaScript. When you do this in JavaScript, you read one database row at a time, send it over the network, wait for it to come to your page, process it, throw it away, and then request the next one. With SQL, all the hard work is done by the database engine, and you don't lose time by transferring unnecessary data over the network. Using the first set of table definitions above, you could write
SELECT mentorID, menteeID
FROM MentorRelationship
GROUP BY mentorID, menteeID
HAVING COUNT(*) > 1;
which will return all the (mentorID, menteeID) pairs that occur more than once.
Once you have a query like this working on the server (and are also pulling out all the information you want to show to the user, which is presumably more than just a pair of IDs), you need to send this over the network to the user's web browser. Essentially, on the server side you map a URL to return this information in some convenient form (JSON, XML, etc.), and on the client side you read this information by contacting that URL with an AJAX call (see jQuery's website for some code examples), and then display that information to the user. No need to write in JavaScript what a database engine will execute orders of magnitude faster.
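For example, here is a minimal client-side sketch of that AJAX call with jQuery, assuming a hypothetical /duplicates URL that runs the query above and returns the pairs as JSON (the URL and response shape are illustrative, not part of the original answer):

// Fetch the duplicate (mentorID, menteeID) pairs found by the SQL query above
$.getJSON('/duplicates', function (pairs) {
    if (pairs.length === 0) {
        alert("none found");
        return;
    }
    pairs.forEach(function (pair) {
        // Display the duplicates however the page needs; a console log keeps the sketch short
        console.log('Duplicate relationship:', pair.mentorID, pair.menteeID);
    });
});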
EDIT 2: As per the second comment, checking whether an item is already in the database
Almost everything I said in the first edit applies, except for two changes: the schema and the query. The schema should become the second of the two schemas I posted, since you don't want the database engine to allow duplicates. Also, the query should be simply
SELECT COUNT(*) > 0
FROM MentorRelationship
WHERE mentorID = #mentorID AND menteeID = #menteeID;
where #mentorID and #menteeID are the items that the user selected, and are inserted into the query by a query builder library and not by string concatenation. Then, the server will get a true value if the item is already in the database, and a false value otherwise. The server can send that back to the client via AJAX as before, and the client (that's your JavaScript page) can alert the user if the item is already in the database.
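A minimal sketch of that round trip on the client, assuming a hypothetical /relationship/exists endpoint that runs the query above and returns a JSON boolean (the URL and parameter names are assumptions for illustration):

// Ask the server whether this (mentorID, menteeID) pair is already stored
$.getJSON('/relationship/exists', { mentorID: mentorID, menteeID: menteeID }, function (exists) {
    if (exists) {
        alert("This mentoring relationship is already in the database.");
    }
});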

Related

Methods for tracking changes when making realtime updates to a webpage

I'm looking to update a list of orders (and statuses) real-time on a webpage. The orders in the (MySQL) database are updated asynchronously through other processes (PHP).
I'm familiar with the mechanics of pushing data to pages (polling, event-source). This is not about that.
What I'm struggling with is figuring out exactly what data to push for each user without
needlessly updating list entries that don't need to be updated, and without
missing an update.
My table does have a DateTime column last_update_date that I update when there are any changes to the order. I know MySQL doesn't really have any event triggers that can trigger other code.
Ideas so far:
In my JS I could track the time of the last request and on every subsequent request, ask for data since that time. This doesn't work because JS time will most likely not match server MySQL time.
The same could probably be done by storing the server time in the user session. I feel like this would probably work most of the time, but depending on the timing of the DB update and the requests, changes could be missed, since the DB only stores a DateTime with a precision of 1 second.
I'm sure there's a more atomic way to do this, I am just drawing a blank though. What are suitable design patterns for this?
You are correct that you must poll your database for changes, and that MySQL can't push changes to other applications.
The trick is to use server time throughout for your polling. Use a table to keep track of polling. For example, suppose your users have user_id values. Then make a poll table consisting of
user_id INT primary key
polldate DATETIME
First, make sure your user has an entry in the poll table showing a long-ago polldate (INSERT IGNORE doesn't overwrite any existing row in the table):
SET @userid := <<your user's id>>;
INSERT IGNORE INTO poll (user_id, polldate) VALUES (@userid, '1970-01-01');
Then when you poll, do this sequence of operations.
Lock the poll row for the user:
START TRANSACTION;
SELECT polldate INTO @polldate
FROM poll
WHERE user_id = @userid
FOR UPDATE;
Retrieve the updated rows you need; those since the last update.
SELECT t.whatever, t.whatelse
FROM transaction_table t
JOIN poll p ON t.user_id = p.user_id
WHERE p.user_id = @userid
  AND t.last_update_date > p.polldate;
Update the poll table's polldate column
UPDATE poll p
SET p.polldate = (
    SELECT IFNULL(MAX(t.last_update_date), p.polldate)
    FROM transaction_table t
    WHERE t.user_id = @userid
      AND t.last_update_date > p.polldate
)
WHERE p.user_id = @userid;
And commit the transaction.
COMMIT;
Every time you use this sequence you'll get the items from your transaction table that have been updated since the preceding poll. If there are no items, the polldate won't change. And, it's all in server time.
You need the transaction in case some other client updates a transaction table row between your SELECT and your UPDATE queries.
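Putting the sequence together, here is a minimal sketch of one poll in Node.js using the mysql2 library (an assumption purely for illustration - the question mentions PHP on the server, but the sequence of statements is the same):

const mysql = require('mysql2/promise');

// Run one poll for a user: lock the poll row, read the new rows, advance polldate.
async function pollUpdates(conn, userId) {
    await conn.beginTransaction();
    try {
        // Lock this user's poll row so concurrent pollers/writers wait for us
        const [pollRows] = await conn.query(
            'SELECT polldate FROM poll WHERE user_id = ? FOR UPDATE', [userId]);
        const lastPoll = pollRows[0].polldate;

        // Fetch everything changed since the previous poll, oldest first
        const [updates] = await conn.query(
            'SELECT t.* FROM transaction_table t ' +
            'WHERE t.user_id = ? AND t.last_update_date > ? ' +
            'ORDER BY t.last_update_date', [userId, lastPoll]);

        // Advance polldate to the newest change we just read (if any)
        if (updates.length > 0) {
            await conn.query(
                'UPDATE poll SET polldate = ? WHERE user_id = ?',
                [updates[updates.length - 1].last_update_date, userId]);
        }

        await conn.commit();
        return updates;
    } catch (err) {
        await conn.rollback();
        throw err;
    }
}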
The solution O.Jones provided would work for making update tracking atomic, but it fails if the following sequence occurs all within one second:
An order update is written to the table (update 1)
A poll action occurs
An order update is written to the table (update 2)
In this scenario, the next poll action will either miss update 2 or duplicate update 1, depending on whether you use > or >= in your query. This is not the fault of the code; it's a limitation of the MySQL DATETIME type, which has only 1-second resolution. It can be somewhat mitigated in newer MySQL versions (5.6.4 and later support fractional seconds), though this still would not guarantee atomicity.
The solution I ended up using was creating an order_changelog table:
CREATE TABLE `order_changelog` (
    `id` int NOT NULL AUTO_INCREMENT,
    `order_id` int NULL,
    `update_date` datetime NULL,
    PRIMARY KEY (`id`)
);
This table is updated any time a change to an order is made, essentially enumerating every update.
For the client side, the server stores in the session the last order_changelog ID that was sent. Every time the client polls, I get all rows from order_changelog that have an ID greater than the ID stored in the session, and join the orders to them.
$last_id = $_SESSION['last_update_id'];
$sql = "SELECT o.*, c.id AS update_id
        FROM order_changelog c
        LEFT JOIN orders o ON c.order_id = o.id
        WHERE c.id > $last_id
        GROUP BY o.id
        ORDER BY order_date";
I now am guaranteed to have all the orders since last poll, with no duplicates, and I don't have to track individual clients.
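On the client, the polling itself can stay very simple. A minimal sketch, assuming a hypothetical /orders/updates endpoint that runs the query above, returns the new rows as JSON and advances the session's last_update_id (the endpoint name and renderOrderRow helper are assumptions):

// Poll the server for order changes and refresh the on-page list
function pollOrderUpdates() {
    fetch('/orders/updates', { credentials: 'same-origin' })
        .then(function (response) { return response.json(); })
        .then(function (orders) {
            orders.forEach(function (order) {
                renderOrderRow(order); // hypothetical helper that updates or inserts a row
            });
        })
        .catch(function (err) {
            console.error('Polling failed', err);
        });
}

// Poll every 5 seconds; the server keeps track of what has already been sent
setInterval(pollOrderUpdates, 5000);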

ArrayCollection (Collection of forms) index collision in Symfony 2

I am using Symfony2 to build up my page.
When I try to update a collection of forms (as described in the cookbook entry "How to Embed a Collection of Forms"), I get a collision between the indexes in the frontend and the indexes of the ArrayCollection in the backend.
I've got the relation User <-> Address (OneToMany). A user wants to create/update/delete his addresses, so with the help of the JavaScript part he can add and remove address elements in the frontend. He does the following:
(1) Adds new address (has index: 0)
(2) Adds new address (has index: 1) and instantly removes this address again
(3) Adds new address (has index: 2).
When he clicks on save button, the following code saves/updates the user (and its addresses):
$this->em->persist($user);
$this->em->flush();
New addresses for example are then correctly persisted to the database.
Now the user wants to update the address e.g. with index 0.
When he now clicks the save button, it updates the address with "index 0", but at the same time it adds the address with "index 2" to the database (object) again.
To better understand the problem, I've drawn a small illustration (handmade, sorry for my bad art skills):
Now I've got the address with "index 1" twice within my object / database.
I know why this happens: the first "index 1" address gets mapped to ArrayCollection element "number 1", and the second gets mapped to "number 2" (because of the frontend name "index 2").
You could say: "it just fills up the addresses until it reaches the frontend index in the backend".
But how can I fix this behaviour?
Side note:
This behaviour occurs with AJAX requests, because if you reloaded the page after clicking the save button, the frontend indexes would be reindexed correctly to match the ones in the backend.
My suggestion for handling the situation: reindex the frontend indexes with the server-side indexes after clicking save. Is this a clean solution / the only solution to my problem?
Yes, this is a problem with the Symfony form collection and it has no easy solution, IMHO. But I have to ask: why don't you do exactly the same thing a page refresh does? You can refresh only the HTML snippet containing the collection; the HTML code for the snippet can come from the server side. Back to your question - yes, reindexing is a good solution unless you want to try writing a custom collection type on your own.
symfony/symfony/issues/7828
There is a similar problem with validation in collections - symfony/symfony/issues/7468.
Well, I think the default collection type and the tutorial in the Symfony docs have some drawbacks. Hope that helps.
I have worked around this issue on the client side by modifying the JavaScript/jQuery code given in the Symfony documentation.
Instead of numbering the new elements by counting the sub-elements, I look at the last element's id and extract its index with a regular expression.
When adding an element, I increment the last index by 1. That way, I never use the same index twice.
Here is my code:
// Initializing default index at 0
var index = 0;
// Looking for collection fields in the form
var $findinput = $container.find(':input');
// If fields were found, then look for the last existing index
if ($findinput.length > 0) {
    // Reading the id of the last field
    var myString = $findinput.last().attr('id');
    // Regular expression to extract the number from an id containing letters, hyphens and underscores
    var myRegex = /^[-_A-Za-z]+([0-9]+)[-_A-Za-z]*$/;
    // Executing the regular expression on the last collection field id
    var test = myRegex.exec(myString);
    // Extracting the last index and incrementing by 1 (exec returns null if nothing matched)
    if (test && test.length > 1) index = parseInt(test[1], 10) + 1;
}
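To show how that index is then used, here is a minimal sketch of adding a new collection row from the data-prototype attribute, following the pattern from the Symfony docs (the addAddressForm helper name is my own, not from the original answer):

// Add one new address form, using the next free index computed above
function addAddressForm($container) {
    // data-prototype holds the row's HTML with __name__ as the index placeholder
    var prototype = $container.attr('data-prototype');
    var newForm = prototype.replace(/__name__/g, index);
    $container.append(newForm);
    // Make sure the next added row gets a fresh index as well
    index++;
}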
I ran into this problem a couple of times during the past two years. Usually, following the Symfony tutorial How to Embed a Collection of Forms does the job just fine. You need to do a little bit javascript coding to add the "edit/update" functionality, but other than that - you should be just fine using this approach.
If, on the other hand, you have a really complex form which uses AJAX for validation/saving/calculations/business logic/etc., I've found it's usually better to store the final data in an array in the session. After submitting the form, inside the if ($form->isValid()) {...} block, you would have
$collection = new ArrayCollection($mySessionPlainArray);
$user->setAddress($collection);
I would like to warn you to be careful with the serialization of your data - you might get some awkward exceptions or misbehavior if you're using entities (see my question).
I'm sorry I can't provide more code, but the solution to this problem sometimes is quite complex.

Breezejs Unique Constraint with Delete

I have the following table:
CREATE TABLE Foo (
    Id int not null primary key,
    YesNo char(1) not null default('N')
)
That has the following constraint: "one and only one row may have the value 'Y'".
CREATE UNIQUE NONCLUSTERED INDEX [IX_YesNo] ON [dbo].[Foo]
(
[YesNo] ASC
)
WHERE ([YesNo]=('Y'))
The application code (Breeze JS) enforces that one row is always 'Y'. So if you delete the row with YesNo = 'Y', the BLL sets another row's YesNo field to 'Y':
origEntity.entityAspect.setDeleted();
otherEntity.YesNo('Y');
When performing the actual DB operations, Breeze FIRST updates the other row to 'Y' before performing the delete of the original, which violates the unique constraint. Is there an easy way to make the DELETE happen first, or do I need special server-side delete handling?
Breeze does not control the order of operations performed on the server. You didn't say what technology you're using on the server but the question tags tell me it is EF and SQL Server. In that case, it is EF that is doing the updates before the delete.
I wish there was a way to tell EF what to do. That is not possible so far as I know.
You can take over, and it isn't hard to do so, especially if you can isolate this sequence of operations from others. Take a look at the beforeSave... methods. If you need both parts of the save inside the same transaction (likely), learn how to set up your own ambient transaction so that you can make two calls to EF (or the database directly): one to do the deletes, and the other to do the updates.

live updating total num of results while user fills in the form

I am currently working on a project and the client is asking for a feature which is going to require the help of JavaScript - which I'm no expert with; I can do the basics but don't really know where to start with this. On the site there is a form (like an advanced search) and on the right it should show the total number of results, but it needs to keep updating as the form is filled in, so that as the user goes through the form the total number of results updates to reflect their input.
I thought maybe it could be done with AJAX: pass along the contents/value of the input, perform a query, then pass back the total number of results. But how would it work for every input - isn't that just going to be overkill? I tried to tell the client it would be a strain on the server (which surely it would be), but they seem dead set on having it.
Any help or techniques would be very useful, or if you have come across something like this before please do let me know.
Thanks
Using jQuery you could attach a function to the blur() event of each input control that is used in the 'advanced search' to perform the AJAX call to the server and get the current number of results.
This way the AJAX call will only fire each time an input field is completed and the focus moves elsewhere.
Of course, if you have many fields this will result in a call each time a field is completed or amended and the focus moves. It would also be wise to ensure that the field value has actually changed before making the AJAX call. Something along the lines of:
var tempVal = "";
// Each field that is used in the advanced search will need to have the
// advancedSearchInput class
$('.advancedSearchInput').focus(function() {
tempVal = $(this).val();
});
$('.advancedSearchInput').blur(function() {
if($(this).val() == tempVal) {
// Get advanced search values and make ajax call
}
});
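For the call itself, here is a minimal sketch, assuming a hypothetical /search/count endpoint that returns JSON like {"count": 42} and a #result-count element on the page (both are illustrative, not part of the original answer):

// Send the current state of the search form and display the returned count
function updateResultCount($form) {
    $.ajax({
        url: '/search/count',          // hypothetical endpoint
        type: 'GET',
        data: $form.serialize(),       // current values of the advanced search fields
        dataType: 'json'
    }).done(function (data) {
        $('#result-count').text(data.count + ' results');
    });
}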
I tend to avoid this sort of thing as, like you say, it can be a strain on the server. Here are a few things I do to help reduce this risk:
Only perform the search/count when the user has entered at least 3 characters
Cache search terms and their resultant count in a memory store like Redis or memcached
Use MySQL full-text searching if you are running a MySQL DB
Another option is to present fuzzy numbers. Get the total number of records when the page loads, then as the user types, randomly take a chunk off the total. Once the user's term is more specific, or they actually click the submit button, you can execute a real AJAX request to get the actual count.
So the user would see something like:
Search: ""
Response: There are about 3097 records relevant to your search
Search: "Ap"
Response: There are about 1567 records relevant to your search
Search: "Apple"
issues ajax request
Response: There are 542 records relevant to your search

Related Parameters in HTML

I have a table of rows and columns on an HTML-based entry form that allows the user to edit multiple records. Each row corresponds to a database record and each column to a database field.
When the user submits the form, the server needs to figure out which request parameter belongs to which row. The method I've been using for years is to prefix or suffix each HTML input element's name to indicate the row it belongs to. For example, all input elements would have the suffix "row1" so that the server would know that request parameters whose names end with "row1" are field values for the first row.
While this works, one caveat of the suffix/prefix approach is that you're adding a constraint that you can't name any other elements with a particular suffix/prefix. So I wonder if there's a better, more elegant approach. I'm using JSP for the presentation layer, by the way.
Thanks.
I don't know JSP very well, but in PHP you would define your input fields' names with an array syntax.
<input name='person[]'>
<input name='person[]'>
<input name='person[]'>
When PHP receives a form like that, it gives you an array (within the standard $_POST array), thus:
$_POST['person']=array('alice','bob','charlie');
Which makes it very easy to deal with having as many sets of fields as you want.
You can also explicitly name the array elements:
<input name='person[teamleader]'>
<input name='person[developer1]'>
would give you an array with those keys. If your current prefixes are meaningful beyond simply numbering the records, this would solve that problem.
I don't know whether the identical syntax would work for JSP, but I imagine it would allow something very similar.
Hope that helps.
Current user agents send back the values in the order of the fields as presented to the user.
This means that you could (theoretically) drop the prefix/suffix altogether and sort it out based on the ordering of the values. You'd get something like
/?name=Tom&gender=M&name=Jane&gender=F&name=Roger&gender=M
I don't know how your framework returns that, but many return it as lists of each value
name = [Tom, Jane, Roger]
gender = [M, F, M]
If you pop an element off of each list, you should get a related set that you can work with.
The downside to this is that it relies on a standard behavior which is not actually required by the specification. Still... it's a convenient solution with a behavior that won't be problematic in practice.
When browsers POST that information back to the server, it is just a list of parameters:
?name_row1=Jeff&browser_row1=Chrome&name_row2=Mark&browser_row2=IE8
So really, I think you can answer a simpler question: how do you relate keys in a key-value list?
Alternatively, you can go to a more structured delivery method (JSON or XML), which will automatically give you a structured data format. Of course, this means you'll need to build this value on the browser first, then send it via AJAX (or via the value of a hidden input field) and then unpack/deserialize it in the server code.
XML:
<rows>
<row><id>1</id><name>Jeff</name><browser>Chrome</browser></row>
<row>...</row>
</rows>
or JSON:
[{ "name":"Jeff", "browser":"Chrome"}, { "name":"Mark", "browser":"IE8" }]
There are many resources/tutorials on how to do this... Google it. Or go with the ostensible StackOverflow consensus and try jQuery.
