I am learning how to create a simple CRUD Node Express Web Application using APIs.
When my index.js page loads, I have a default HTML table that displays a record of data, populated within this function:
document.addEventListener('DOMContentLoaded', function () {
    fetch('http://localhost:3000/getQuestion/' + ID)
        .then(response => response.json())
        .then(data => loadHTML(data['data']));
});
Inside this function I make a GET API call to return the data, which populates the table with a record stored in my database.
I run into a problem when I want to update this table with the next record. When I get the next record, I increment my ID by 1.
But in order to display the new record in the table, I also have to perform a location.reload() to refresh the table's content with the new data. This in turn clears out my ID indexes.
My questions are:
Where is the correct place within a JS file to display the table data for my HTML table? Does it make sense to put it in a document.addEventListener('DOMContentLoaded', ...) handler?
If so, then every time this table data changes I have to call a page refresh. Is this correct?
Doing the page refresh clears out all my variables that are used to populate the parameters of my APIs.
Where am I going wrong here?
Where is the correct place within a JS file to display the table data for my HTML table? Does it make sense to put it in a document.addEventListener('DOMContentLoaded', ...) handler?
Yes and no. DOMContentLoaded is almost never needed outside library development. Instead, to make your code run only after the page has been parsed and the DOM built, do one of these things:
Use a module rather than a script (<script type="module" ...>) if your target browsers support it; or
Use defer on your non-module script tag; or
Put your non-module script tag at the end of the HTML, just before the closing </body> tag
If you do any of those things, your page's DOM is ready to be manipulated by the time your code runs, and you can just do the fetch without wrapping it in an event handler. Using a module also has the benefit that your top-level variables aren't globals, and module code is always in strict mode.
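For example, with a deferred or module script, the wrapper can simply be dropped; a minimal sketch, assuming an ID counter like the one in the question:

// loaded via <script type="module" src="index.js"> or <script defer src="index.js">,
// so the DOM is already built by the time this runs
let ID = 1; // hypothetical starting record ID
fetch('http://localhost:3000/getQuestion/' + ID)
    .then(response => response.json())
    .then(data => loadHTML(data['data']));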
But doing it in a DOMContentLoaded handler is fine too.
If so, then every time this table data changes I have to call a page refresh. Is this correct?
There are other ways to change or add to the table data. You haven't shown loadHTML, but you can use appendChild on the table or tbody element, insertAdjacentHTML, insertAdjacentText, etc. More on MDN.
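Since loadHTML isn't shown, here's a hypothetical sketch of replacing the table body in place instead of reloading the page (the table id and record field names are assumptions):

function loadHTML(record) {
    // assumed markup: <table id="question-table"> with a <tbody>
    const tbody = document.querySelector('#question-table tbody');
    tbody.innerHTML = ''; // clear the previous record
    const row = tbody.insertRow();
    row.insertCell().textContent = record.id;       // assumed field names
    row.insertCell().textContent = record.question;
}

Because the page never reloads, your ID counter keeps its value between requests.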
Doing the page refresh clears out all my variables that are used to populate the parameters of my APIs.
Yes, refreshing the page completely resets the global environment and all nested environments, so every variable on the page is lost.
Side note: There are two minor problems with the fetch code you've shown:
I'm afraid it's falling prey to the fetch API footgun. fetch only rejects its promise on network error, not HTTP error. So a 404, 500, etc., doesn't reject the promise. You have to check for HTTP success explicitly (which is why I always use a wrapper function).
You're not handling errors.
To fix both:
fetch('http://localhost:3000/getQuestion/' + ID)
    .then(response => {
        if (!response.ok) {
            throw new Error("HTTP error " + response.status);
        }
        return response.json();
    })
    .then(data => loadHTML(data['data']))
    .catch(error => {
        // Handle/report error
    });
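A minimal version of such a wrapper might look like this (fetchJSON is a hypothetical name, not part of the fetch API):

function fetchJSON(url, options) {
    return fetch(url, options).then(response => {
        if (!response.ok) {
            throw new Error("HTTP error " + response.status);
        }
        return response.json();
    });
}

// usage
fetchJSON('http://localhost:3000/getQuestion/' + ID)
    .then(data => loadHTML(data['data']))
    .catch(error => {
        // Handle/report error
    });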
I'm using Vue.js with Nuxt.js, and I'm still confused about when to use data vs. asyncData. Why would I need to use asyncData when I just have data that simply displays on the page?
I have a data object of FAQs and just want to display the data without doing anything with it. What are the benefits of using asyncData? What are the best use cases for it?
Should I make list data such as this async by default when using it inside my component?
Data
data: () => ({
    faqs: [
        { "title": "faq1" },
        { "title": "faq2" },
        { "title": "faq3" },
    ]
}),
asyncData
asyncData(context) {
    return new Promise((resolve, reject) => {
        resolve({
            colocationFaqs: [
                { "title": "faq1" },
                { "title": "faq2" },
                { "title": "faq3" },
            ]
        });
    })
        .then(data => {
            return data;
        })
        .catch(e => {
            context.error(e);
        });
},
asyncData happens on the server side. You can't access browser things like localStorage or fetch(), but on the other hand you can access server-side things.
So why should you use asyncData instead of Vue lifecycle hooks like created?
The benefits of using asyncData are SEO and speed. There is a special context argument. It contains things like your store (context.store). It's special because asyncData happens on the server side, but the store usually lives on the client side. That means you can fetch some data, populate your store with it, and display it somewhere else. The benefit is that it all happens server-side, which improves your SEO; the Google crawler, for example, doesn't see a blank page.
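A sketch of that pattern, assuming a Nuxt page, a store mutation named setFaqs, and an HTTP client injected as context.$axios via the @nuxtjs/axios module (all assumptions, not from the question):

async asyncData(context) {
    // runs on the server during the first render
    const { data } = await context.$axios.get('/api/faqs'); // hypothetical endpoint
    context.store.commit('setFaqs', data); // populate the store server-side
    return { faqs: data }; // merged into the component's data
},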
why would I need to pre-render it when it is going to be displayed anyway
Yes, for us it doesn't matter whether I send one file to the client and it renders all the data, like in SPAs, or whether it's pre-rendered. But it does matter for the Google crawler. If you use SPA mode, the crawler just sees a blank page. You can see this yourself: go to any SPA website, right-click, and view the page source; you will see that there's only one div tag and a few <script> tags. (Don't press F12 and inspect, that's not what I mean.)
I have Tabulator nearly working as I need for a web application I am designing. This app is calling web services in a backend app written in Java.
I created an initial filter set; the filtering, sorting, and pagination are handled by the backend. Next, I am creating an Accordion control for the various filter inputs from the end-user. No issues yet. I created two buttons, one to apply the filter based on the user's preferences, and another to reset/clear the filter parameters.
The Tabulator object is already created and has the default data showing on the page. When the user sets the custom filter and clicks the Apply button, a jQuery function captures the click event and executes the following code.
$(function(){
    $('#btn-apply').on('click', function(e){
        // Handle the click event of the button
        // Get values first
        var subFrom = $('#txt-submission-from').val();
        var subTo = $('#txt-submission-to').val();
        // Set filters
        NIBRSTable.clearFilter();
        NIBRSTable.addFilter("submissionPeriod", ">=", subFrom);
        NIBRSTable.addFilter("submissionPeriod", "<=", subTo);
        // Call function to load data
        NIBRSTable.setData();
    });
});
Error Returned
Ajax Response Blocked - An active ajax request was blocked by an attempt to change table data while the request was being made
tabulator.min.js:5:24222
I have tried commenting out one source line at a time. It appears the addFilter() calls are causing the Ajax Response Blocked error even though nothing is actively occurring (the Tabulator DOM is already loaded).
I have many more items for which the end-user may filter. The two filters shown in the code listing above are just a start.
That isn't an error message, that is just a console warning.
What it means is that multiple ajax requests have been made in quick succession, with a second request made before the first one returned; the response of the first request is therefore ignored so that the table isn't partially redrawn.
In this case it is being triggered because you are calling the addFilter function twice in quick succession, which triggers the ajax request twice, with the second filter being added before the first ajax request has been sent. (There is also no need to call the setData function; adding a filter when ajaxFiltering is enabled will automatically trigger the request.)
To avoid this double ajax request you could pass an array of filter objects into the addFilter function and only call it once:
NIBRSTable.addFilter([
    {
        field: "submissionPeriod",
        type: ">=",
        value: subFrom
    },
    {
        field: "submissionPeriod",
        type: "<=",
        value: subTo
    },
]);
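If the filters depend on what the user actually filled in, the array can be built conditionally before the single addFilter call; a sketch using only the two fields from the question:

var userFilter = [];
var subFrom = $('#txt-submission-from').val();
var subTo = $('#txt-submission-to').val();
if (subFrom) {
    userFilter.push({ field: "submissionPeriod", type: ">=", value: subFrom });
}
if (subTo) {
    userFilter.push({ field: "submissionPeriod", type: "<=", value: subTo });
}
NIBRSTable.addFilter(userFilter); // one call, one ajax request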
Oli,
Thank you for the detailed response. Since the filters are dynamic and set by the end-user (i.e. they cannot be hardcoded), I created an object and conditionally added the filter parameters. Using this object, I can call NIBRSTable.addFilter(userFilter) and it works like a charm! I did make the mistake of trying to JSON.stringify the object and pass it to the addFilter method, but quickly learned that was unnecessary since the filter array is already a plain JavaScript object.
Oddly, though, I am still receiving a single "Ajax Response Blocked" warning even though there were no pending Ajax actions. I only have the one .addFilter() call and removed the .setData() as you suggested. I will ignore it for now since the filtering is working!
Ben
I created a small sample application using VueJs and created a C# REST API to store and retrieve data in a SQL Server back end.
For testing, I created a simple web page with a form to create a "note". The note is stored by the following function, 'saveData()':
saveData()
{
    let promiseStack = [];
    var jsondata = JSON.stringify(this.note);
    promiseStack.push(this.$http.post('REST_API/note', jsondata));
    Promise.all(promiseStack).then(data =>
    {
        this.$http.get('REST_API/note');
        this.$router.push({ name: 'viewnotes', params: { id: data[0].body.id }});
    }, error =>
    {
        console.log(error);
    });
}
I tried to use a promise to wait until the 'store' operation in the backend is complete, and issue a GET request to retrieve all notes once the promise is fulfilled.
However, the GET request inside the promise handler doesn't return any data. If I issue the GET request manually later on, I retrieve the data that was stored previously.
So I had a look at the C# REST API. There are currently two functions: createNote(...) and getAllNotes(...). I used a StreamWriter to log to the filesystem when these functions are called, with millisecond precision. What I see is that 'createNote' is called after 'getAllNotes'. So I suspect that the API is working correctly, but something about the way I'm using promises seems to be awfully wrong.
Maybe somebody has a hint?
UPDATE
I know from the developer toolbar in Chromium that the GET request doesn't return any data. The response is empty.
The network tab of the developer toolbar shows that the requests are submitted in the correct order, so the POST request is issued first.
It seems I found the problem. I had an href attribute on my 'Save' link, which triggered an early routing. The intended POST and GET were fired correctly, but there was another GET in between somewhere because of the href attribute on the link, even though it was empty.
I removed the tag, now it works as intended.
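For completeness, the GET response inside the promise handler above is also discarded; if the retrieved notes are actually needed, the call has to be returned and its response consumed. A sketch using the same endpoints (the this.notes property is an assumption):

saveData()
{
    var jsondata = JSON.stringify(this.note);
    this.$http.post('REST_API/note', jsondata)
        .then(created => {
            // the POST has finished, so the GET will see the new note
            return this.$http.get('REST_API/note').then(all => {
                this.notes = all.body; // hypothetical component property
                this.$router.push({ name: 'viewnotes', params: { id: created.body.id } });
            });
        })
        .catch(error => console.log(error));
}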
I have a dynamic action on an Oracle APEX page that executes PL/SQL code (a stored procedure that creates a BLOB).
When the user clicks it, the JavaScript behind it (the dynamic action) 'runs' the PL/SQL, and the page is 'locked' until everything finishes.
How can I make something that doesn't lock the browser entirely until this process is finished?
Thank you
One straightforward way, if you haven't moved to version 5 yet, is to use the APEX_PLSQL_JOB package.
Define the functionality you need inside the database, in a function or procedure in a package.
Then, start the function in the background using the API.
For example, if you have defined a procedure upload_pending inside a package PROJECTX_IO, then you can start it with:
declare
    p_job number;
begin
    p_job := apex_plsql_job.submit_process(
        p_sql    => 'BEGIN PROJECTX_IO.upload_pending; END;',
        p_status => 'Uploading');
end;
You are currently not doing anything with the job id returned in p_job. What works, if you know you only have a single background job running at any time, is to add it to a one-row table in the database. This simply goes to the bottom of the code snippet above (before the end;):
delete from t_job;
insert into t_job values ( p_job );
Then, inside the background process, you have access to the job id of the process. You could change the job status at the end with:
select job into p_job from t_job;
apex_plsql_job.update_job_status(p_job, 'Upload complete');
How to deal with multiple background jobs?
This may be a long shot, but I'm looking for someone who has worked with the Tealium UDO (Universal Data Object). I have a search page with a Google Search Appliance; my utag_data object in the data layer looks like this:
var utag_data = {
    "country": "US",
    "language": "EN",
    "search_keywords": "blahblah",
    "search_results": "0"
}
The problem here is that the search_results property hasn't had enough time to wait for the real results number to load, so it defaults to 0 instead of the real number, 1200. I've read Tealium's documentation around utag.view() and utag.link() and want to use one of these to update the search_results tag. I tried:
utag.link({'search_results':'1200'});
and
utag.view(utag_data,null,[12]);
where 12 is the UID of the tag in Tealium. But when using Omnibug in Firefox I'm not seeing any updated values, though it is sending the click event to AT Internet.
Does anyone have any experience with this? Thank you in advance
You can either wait to call the main utag.js Tealium script, or send along another data point using utag.link or utag.view. It is not possible to "update" the initial utag_data object once sent.
These methods are used to handle sending dynamic events/data. See the additional discussion on the Tealium blog: "Ajax tracking... when URLs no longer change".
From utag.link() and utag.view() on Tealium Learning
Syntax
The link and view methods allow you to pass three different parameters:
parameter 1: a JSON object
utag.view({'search_results':'1200'});
parameter 2: a callback function (optional, may be set to null)
parameter 3: an array of Tags (optional: if used, these are the only Tags that will fire)
utag.link(
    {'search_results':'1200'},
    function(){ alert("Only fired tag 12 with this call"); },
    [12]
);
Notes:
The utag_data object declared on the initial page landing is not re-used by these calls. If data from the initial page landing is needed, it must be re-declared and passed again in the method call. For example, if language:"en" was passed on page landing and language is needed for a tag fired by a utag method call, then language will need to be passed again (see the sketch after these notes).
utag.view() should not be called on initial page load; it should only be used for dynamic content loaded within the page.
Global and Tag-scoped Extensions are executed during these calls. Pre-loader and DOM Ready extensions will not be executed during these calls.
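Putting it together with the values from the question, a dynamic call that re-passes the page-level data along with the corrected result count might look like this:

utag.view({
    'country': 'US',
    'language': 'EN',
    'search_keywords': 'blahblah',
    'search_results': '1200' // the real count, now that the results have loaded
});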