I am attempting my first SPA.
It will be an HTML representation of our database structure, given to clients so they can browse the model and run queries against the model itself (not against the database data).
The requirement is therefore for no updates; the SPA will be shipped with the release and so must work offline. Currently it is a static HTML page.
My question is: is there a way to use Breeze to query the JSON file I've created that describes the model? All the examples I've seen initialise the EntityManager with a service URL that returns the data.
Not quite sure I understand the question. What do you mean by "no server"? Does this mean that you want to bring all of the data down just once and then query it locally?
If the data that you want to query is actually itself metadata, then, provided you describe the structure of that metadata (i.e. metadata of the metadata) in Breeze's native metadata format, you should be able to query the metadata itself via Breeze's EntityQuery (sketched below).
Probably a little more info would be helpful.
Also, take a look at the Breeze NoDb sample for an example of "custom" metadata construction.
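To make that concrete, here is a minimal sketch of querying purely local data with Breeze, with no service URL involved. The metadata object, the "Table" entity type, and its properties are all hypothetical; you would substitute the metadata-of-metadata you actually define:

// No serviceName: nothing is ever fetched from a server.
var manager = new breeze.EntityManager();

// Import hand-written metadata describing the *structure* of the model file.
manager.metadataStore.importMetadata(myMetadataOfMetadata);

// Load the model-description JSON into the local cache as entities.
modelJson.tables.forEach(function (t) {
    manager.createEntity('Table', { name: t.name, schema: t.schema });
});

// Query the cache directly; executeQueryLocally is synchronous and offline.
var query = breeze.EntityQuery.from('Tables')
    .toType('Table') // map the resource name to the entity type for local queries
    .where('name', 'startsWith', 'Cust');
var results = manager.executeQueryLocally(query);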
For a project at university we are working on an application that is supposed to automatically create a file for the user after having collected several pieces of information from them. The general idea is to use Decision Model and Notation (DMN) to perform the query and collect the information needed. The content of the file depends on the answers provided by the user. The application is also intended to be web-based.
My question is therefore: how can we put the strings that result from the DMN query into a PDF template that is ready to print/send? The template is currently a text document (.docx) that has several input fields that need to be filled.
Thanks!
You can use Kogito for the DMN execution side; it is JVM-based, but it exposes automatically generated REST (JSON) endpoints to evaluate the DMN model. Based on the requirements you listed, this should be an easy way to achieve the DMN evaluation part; that is, with Kogito you drop the .dmn model file into the src/main/resources directory and it automatically provides a cloud-native application exposing the REST endpoint.
Then the resulting JSON payload (the DMN evaluation results) can be fed into a template engine in order to generate the final PDF from the JSON, converting from a friendlier intermediate format. For instance, this could be done with the Apache FreeMarker or Velocity template engines: you could target HTML or ODF, and finally perform the conversion to PDF.
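As a rough illustration of the flow from the browser side, here is a sketch that posts the user's answers to a Kogito-generated DMN endpoint and drops the results into an HTML template. The model name /Pricing, its inputs, and the decision names are invented; the real endpoint path is derived from your .dmn model:

// (inside an async function)
// Evaluate the DMN model via the REST endpoint Kogito generates.
const response = await fetch('http://localhost:8080/Pricing', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ Age: 47, 'Previous incidents?': false }) // DMN inputs
});
const result = await response.json();

// Feed the decision results into a simple HTML template; a real template
// engine (FreeMarker/Velocity) would play this role server-side, and the
// HTML would then be converted to PDF.
const html = '<h1>Quote</h1><p>Base price: ' + result['Base price'] + '</p>';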
If I use a JSON file stored in one of my GitHub repos as a mock backend, I know how to fetch and read all the data. Is it also possible to edit or post new data to this JSON file? Would an alternative mock backend like Mocky.io be a better solution (to achieve full CRUD)?
I think you could store the information inside CSV files or something like that, but then you would be recreating a database engine such as MongoDB and would have to create your own reader to find the info. Alternatively, you could store the user's info using localStorage.
However, this would make your app very limited.
Here's the link for the documentation of local storage
https://developer.mozilla.org/es/docs/Web/API/Window/localStorage
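For illustration, a minimal sketch of using localStorage as a tiny client-side store (the "users" key and the record shape are arbitrary):

// localStorage only stores strings, so serialise with JSON.
const users = JSON.parse(localStorage.getItem('users') || '[]');
users.push({ id: Date.now(), name: 'Ada' });
localStorage.setItem('users', JSON.stringify(users));

// Note: the data lives only in this browser profile, typically capped at ~5 MB.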
Well, if you want to try out CRUD operations you can use free JSON APIs like http://jsonplaceholder.typicode.com/ or
https://mockfirst.com/
where you can create, read, update and delete data using various API endpoints. It is better to go this way first; then you could move on to updating a JSON file.
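For example, a quick sketch against jsonplaceholder (note that it fakes writes: it responds as if the resource were created or deleted, but nothing is actually persisted):

// (inside an async function)
// CREATE: the API responds with the object it would have created.
const created = await fetch('https://jsonplaceholder.typicode.com/posts', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ title: 'hello', body: 'world', userId: 1 })
}).then(r => r.json());

// READ an existing resource.
const post = await fetch('https://jsonplaceholder.typicode.com/posts/1')
    .then(r => r.json());

// DELETE (again, only simulated on the server).
await fetch('https://jsonplaceholder.typicode.com/posts/1', { method: 'DELETE' });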
(UPDATE)
You can use https://jsonbin.io/
Here you can place your own data and use it as an API.
Since datasets in SpagoBI can be created using scripts, I need to connect to and query my MongoDB database using JavaScript (or Groovy).
I need to use scripts to be able to execute aggregation on the MongoDB data; I can't use aggregation directly on my MongoDB because my data type is String.
I don't know how to access my database using scripts.
Any ideas?
You should create a Mongo dataset. The steps are:
Step 1: Create a Mongo datasource in the administrator console. Note: the type must be JDBC and the value for the Class input field must be "mongo".
JDBC: {unit_host}:{port}/${db}
CLASS: mongo
Step 2: Now you can create a dataset. The procedure is the same as for query datasets; the difference is the language: JS instead of SQL.
Take a look at the SpagoBI wiki, in particular here: http://wiki.spagobi.org/xwiki/bin/view/spagobi_server/data_set#HQueryDataSet28Mongo29
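As a sketch of what such a dataset script might contain, here is Mongo shell-style JavaScript that works around String-typed values by parsing them before aggregating. The sales collection, its fields, and the way the result is returned are assumptions; adapt them to your schema and to how your SpagoBI version consumes the script's result:

var totals = {};
db.sales.find().forEach(function (doc) {
    // The values are stored as Strings, so parse before summing.
    var amount = parseFloat(doc.amount);
    totals[doc.category] = (totals[doc.category] || 0) + amount;
});
totals; // the script's result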
When connecting to MongoDB, you pass auth stuff in the URL. Since the scripts lie on the client side, it would be hard to make the connection secure (unless you are talking about backend JavaScript). Anybody would be able to see how to connect to your DB and, for instance, delete all its content.
I would suggest a simple API in front of the database (sketched below). Then you control what a user can do against the database.
Or have I misunderstood the scenario?
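A minimal sketch of that idea, assuming Node.js with the express and mongodb packages; the DB/collection names and routes are placeholders:

const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
app.use(express.json());

async function main() {
    // Credentials stay on the server; the browser never sees the Mongo URL.
    const client = await MongoClient.connect('mongodb://localhost:27017');
    const items = client.db('mydb').collection('items');

    // Expose only the operations users are allowed to perform.
    app.get('/api/items', async (req, res) => {
        res.json(await items.find({}).limit(100).toArray());
    });
    // Deliberately no DELETE route: the API decides what users may do.

    app.listen(3000);
}

main();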
I have a list of data being returned from a "standard" HttpGet IQueryable method on an ApiController that implements the Breeze EFContextProvider. When one of the objects references another object that has already been returned in the payload, Breeze gives me a $ref that points to the object that was already returned.
I want the object returned explicitly with all related objects, not a reference with $ref. Also, I'm not using the breeze.js library on the client side; I'm simply making straight calls to the controller with a web address.
I found this:
Breeze does not replace the Ref: node with its real data
which is the thing I'm looking for, but using Include on the server still doesn't return all of the data.
Any idea how to "force" Breeze on the server side to include all related data, even if it was already returned and referenced in the payload?
Update 1
Per Steve's answer below, I added the following to the BreezeWebApiConfig.RegisterBreezePreStart method in the App_Start folder:
var json = GlobalConfiguration.Configuration.Formatters.JsonFormatter;
json.SerializerSettings.PreserveReferencesHandling = PreserveReferencesHandling.Object;
Compiling and running produces the same output, with only the $ref group instead of the full data. I'm sending a request to the server with $expand for the collection. Do I need to change the SerializerSettings on each request to the controller, or should adding this to the BreezeWebApiConfig.RegisterBreezePreStart method be enough?
Update 2
I've added a CustomBreezeConfig class per the instructions at the link Steve added in his answer. I am, however, using Breeze.WebApi2, so the BreezeConfig is actually in Breeze.ContextProvider. The code compiles, but I'm still seeing the same $ref for the actual object in the JSON.
Do I need to put this CustomBreezeConfig class in a specific place in my project for Breeze to pick up its serializer settings?
Under Web API, Breeze uses the Json.NET serializer to turn the results into JSON. You can change the serializer settings (specifically the PreserveReferencesHandling setting) to change this behavior.
Breeze configures its own JSON serializer, so in a Breeze app you'll need to configure it as described in the Breeze Web API Controller doc.
Note that if you turn reference preservation off, you may also need to configure the ReferenceLoopHandling setting if you have circular references in your object graphs (as most of us do).
I have a Realtime data model with a lot of data in it. When I try to load it using the API call, my onLoaded function does not get called. Similarly, my error-handling function does not get called, even though one of the underlying API calls (https://drive.google.com/otservice/gs?id=...&access_token=...) receives a 409 response from the server.
My attempts to load smaller data models work fine. I am confident that I am using the API correctly, since I started my code from the example provided on the Realtime API Quickstart page.
Google Drive has the concept of requesting a partial response using the fields parameter to reduce the amount of data returned. I cannot see similar functionality for the Realtime API. Does it exist?
Is there a way to download the Realtime data model as a generic file so I can pre-populate my application with data until the Realtime API has completely loaded?
Data models greater than around 10 MB are not currently supported. It sounds like you might be running into this limit.
You should think about how you can reduce the amount of data you are storing, e.g., store large items like images outside of the Realtime model, or (based on what you said you were doing in your previous question) do some smoothing to reduce the number of points stored as it increases.
You can export the data model right now in the JavaScript API: https://developers.google.com/drive/realtime/reference/gapi.drive.realtime.Document#gapi.drive.realtime.Document.prototype.exportDocument
But in order to do that, you have to load the document first.
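A sketch of what that might look like; the file ID is a placeholder, and I'm assuming the callback-style exportDocument signature described on the reference page above:

gapi.drive.realtime.load(fileId,
    function onLoaded(doc) {
        // Export is only possible once the document has loaded.
        doc.exportDocument(function (exported) {
            console.log(JSON.stringify(exported)); // snapshot of the model
        });
    },
    function onInitialize(model) { /* first-time model setup */ },
    function onError(err) { console.error(err); });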