Our Eloquent models have attributes following the Laravel snake_case convention,
e.g. first_name, last_name and created_at.
However, my frontend (React) follows the JavaScript camelCase standard,
e.g. firstName, lastName and createdAt.
Is there a simple way to convert ALL attributes to camelCase when sending an API response?
We are using Laravel 5.2 and the Dingo API package.
UPDATE
Following on from the accepted answer, I used the Custom Response Format approach. See the following gist for the implementation (includes unit tests):
https://gist.github.com/andrewmclagan/c5e0fe601d23f3b859b89a9f8922be68
You really have a few options. I won't go into implementing them (unless needed), but here are a few I can think of:
In Laravel:
Override the toArray() method on the model. When a model is converted to JSON, it calls toArray(). You can use this method to go through and convert all the keys to camelCase. You would need to override this on every model, though that can be abstracted through a parent class or a trait (see the sketch after this list).
In Dingo:
Use Transformers with the Response Builder. You could create a transformer for every model, or you could create one CamelTransformer and register it with every model. You would convert the keys to camelCase in the transformer.
Create a Custom Response Format. You can create a camelJson response format that extends the default json format. Override the necessary methods with the logic to camelCase the keys.
Use the ResponseWasMorphed event. You can create an event handler for the ResponseWasMorphed event, and go through and camelCase your keys in there.
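For the first option, here is a minimal sketch of such a trait; the HasCamelCaseAttributes name is hypothetical, and Str::camel() is Laravel's built-in helper:
use Illuminate\Support\Str;

trait HasCamelCaseAttributes
{
    // Convert the model's attributes to an array with camelCase keys.
    public function toArray()
    {
        $camelArray = [];
        foreach (parent::toArray() as $key => $value) {
            $camelArray[Str::camel($key)] = $value;
        }
        return $camelArray;
    }
}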
Any one of these options should be able to do the job; it's just a matter of how global or granular you want these transformations to be. For example, modifying toArray() will affect all code that converts your model to an array (or JSON), not just your response code, so you may not want your change to be that global.
I would think that the Custom Response Format is probably the best combination of ease and appropriateness for your situation.
Don't do this. The server doesn't need to know anything about its clients. If your React application needs to handle properties in camelCase, delegate that task to the client. The client should have one point in the system where all requests pass through. That's the correct place: just after receiving the response, the client must transform it.
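For example, here is a minimal sketch of such a single pass-through point on the client; the camelizeKeys and apiFetch names are hypothetical:
// Recursively convert snake_case keys to camelCase.
const camelize = (s) => s.replace(/_([a-z])/g, (_, c) => c.toUpperCase());

const camelizeKeys = (value) => {
  if (Array.isArray(value)) return value.map(camelizeKeys);
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [camelize(k), camelizeKeys(v)])
    );
  }
  return value;
};

// Every request goes through this wrapper, so the transform lives in one place.
async function apiFetch(url, options) {
  const response = await fetch(url, options);
  return camelizeKeys(await response.json());
}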
I solved it as follows:
In your controller class:
return response()->json($model);
// => ['userName' => 'foo', 'userKey' => 'bar', ...]
Requirements
Laravel 5+
Install
$ composer require grohiro/laravel-camelcase-json:~1.0
# Laravel 5.7+
$ composer require grohiro/laravel-camelcase-json:~2.0
Add the service provider.
config/app.php
'providers' => [
// default providers
// ...
Grohiro\LaravelCamelCaseJson\CamelCaseJsonResponseServiceProvider::class,
],
Reference: https://github.com/grohiro/laravel-camelcase-json
I'm defining a custom input field on type-graphql of type JSON. We're using Prisma as well. I tried with Prisma.JsonValue, Prisma.JsonObject and JSON, but I get errors. Any suggestion will be welcome.
You can't directly use Prisma-generated models as type definitions in TypeGraphQL. You will need to create custom classes as shown in the TypeGraphQL docs (with @ObjectType(), @InputType() etc. decorators).
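For illustration, a minimal sketch of such hand-written classes; the Recipe/RecipeInput names are hypothetical:
import "reflect-metadata";
import { ObjectType, InputType, Field, Int } from "type-graphql";

// Defined by hand rather than reusing the Prisma-generated type.
@ObjectType()
export class Recipe {
  @Field(() => Int)
  id!: number;

  @Field()
  title!: string;
}

@InputType()
export class RecipeInput {
  @Field()
  title!: string;
}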
There is a third-party library for generating TypeGraphQL types from Prisma, called typegraphql-prisma, which you could consider. However, in my opinion it's easier to define the classes manually.
Furthermore, I'm not sure what you mean by JsonValue. If you need to pass arbitrary JSON data, perhaps you could stringify your JSON object and pass it as a String?
Let me preface this question by saying that I am fairly new to TypeScript, but a long-time JavaScript developer.
I have an existing JavaScript application that I am looking to transition over to TypeScript. The JavaScript app makes a REST API call to fetch data, checks whether it exists, and then conditionally renders the app. In JavaScript, I can dynamically check whether properties exist on the response object and conditionally render based on that. In TypeScript, this throws an error because the data is of type any, and the compiler doesn't know whether those properties exist or not.
In a TypeScript application, is it pretty common to create types for all of your API responses, so that you can get type safety on them? Using Node as your backend, I could see a huge opportunity where you might be able to share backend and frontend models. I am currently using .NET Core for my backend, and I am concerned I might be shooting myself in the foot trying to always create TypeScript models from the Entity Framework models.
Does anyone else use .NET Core for the backend and React on the frontend? How do you handle API responses in TypeScript?
I can't tell you if it's common, but we write json-schema files for every endpoint. These schema files are:
Used to validate request bodies and generate errors.
Used to generate documentation.
Converted into TypeScript types, which are used in both the front- and backend.
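The last step can be automated; here is a minimal sketch, assuming the json-schema-to-typescript package and an illustrative User schema:
import { compile } from "json-schema-to-typescript";

compile(
  {
    title: "User",
    type: "object",
    properties: {
      firstName: { type: "string" },
      lastName: { type: "string" },
    },
    required: ["firstName", "lastName"],
    additionalProperties: false,
  },
  "User"
).then((ts) => {
  // `ts` is TypeScript source for an exported User interface,
  // which can be written to a file shared by front- and backend.
  console.log(ts);
});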
We are using the same stack as you (.NET Core backend / React frontend) with TypeScript. The way we handle it is by creating types for the objects the backend sends us and then converting the dynamic checkers, like the ones you mention, into user-defined type guards.
So, roughly speaking, for the various data transfer objects the server returns we have a type/type guard pair that looks like this:
// type
export type SomeDto = { someKey: string; someOtherKey: number }

// type guard
export const isSomeDto = (returnedObj: any): returnedObj is SomeDto =>
  typeof returnedObj.someKey === "string" &&
  typeof returnedObj.someOtherKey === "number"
Then we basically have the following in the fetching code:
const someReturn = fetchDataFromApi(endpoint)

if (isSomeDto(someReturn)) {
  // now typescript will recognize someReturn as SomeDto
} else {
  // throw or otherwise handle the fact you have bad data from the
  // server
}
This way you get the dynamic checking in JavaScript at runtime and type safety at compile time. These guards aren't super fun to write (and you'll want to unit test them all), and I imagine if the number of possible objects were bigger we'd opt for a more automated solution like @Evert mentions in the other answer. But for a few objects and existing JavaScript validators, it is a pretty convenient way to get typing on your API returns.
I want to make my React Native app multilingual. I use the react-native-localization library, with JSON to store the translations.
My question: is it possible to change the JSON strings dynamically, for example by adding a new language or changing a translation that already exists?
The feature you can use is CodePush; with it you will be updating the source JS files. If you used a dynamic JSON file you would have to request it every time, but using CodePush you can update the source JSON itself.
More info
https://github.com/Microsoft/code-push/
You can use the react-native-localization setContent method on top of your local strings. Check the docs:
Update / Overwrite Locale
You might have default localized strings in the build but then download the latest localization strings from a server. Use setContent to overwrite the whole object. NOTE that this will remove all other localizations if used.
strings.setContent({
en:{
how:"How do you want your egg todajsie?",
boiledEgg:"Boiled eggsie",
softBoiledEgg:"Soft-boiled egg",
choice:"How to choose the egg"
}
})
My end goal is to read any JSON schema and represent it in a tree (HTML).
For this I need a method to parse the JSON schema (right?). I went through the implementations on this page, and this editor which outputs an HTML form from a JSON schema.
What I am asking is whether there is an optimal open-source solution I can use, or is my approach wrong?
Is there a way to get a list of properties along with their attributes?
You can use Ajv with custom keywords to create a JSON data processor/parser (the JSON Schema will be used as data in your case).
You will need to define a schema with custom keywords that would be used to process your schema and generate/collect any side effects you need in the validation context (you'll need to pass this context to the validation function with the call/apply method and use the passContext option so it is passed to subschemas and custom keywords).
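A minimal sketch of the idea, assuming the Ajv v6 API; the collectProperties keyword and the "meta-schema" below are illustrative and only inspect the top level:
var Ajv = require('ajv');
var ajv = new Ajv({ passContext: true });

// Custom keyword: record every property name found in the schema-as-data.
ajv.addKeyword('collectProperties', {
  validate: function (keywordValue, data) {
    if (data && typeof data.properties === 'object') {
      this.found.push.apply(this.found, Object.keys(data.properties));
    }
    return true; // we only collect side effects, never fail
  }
});

// "Meta-schema" that is compiled and run against the JSON Schema we inspect.
var collector = ajv.compile({ collectProperties: true });

var context = { found: [] };
collector.call(context, {
  type: 'object',
  properties: { name: { type: 'string' }, age: { type: 'integer' } }
});

console.log(context.found); // [ 'name', 'age' ]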
This approach is used in JSONScript evaluation schema to evaluate script (but instead of script you would pass your schema as data).
I'm trying to wrap my head around Node.js streams. Note that I'm pretty new to JavaScript and Node; the last languages I really got were Perl and PHP :-D
I've read the Buffer/Streams documentation @ nodejs.org, watched James Halliday @ LXJS, read his stream-handbook and Thorsten Lorenz's event-stream post. I'm starting to understand the basics :)
I process data which is serialized in RDF (which is neither JSON nor XML). I manage to fetch the data (in real code via request) and parse it into a JS object using the rdfstore module.
So far I do this:
fs.createReadStream('myRDFdata.ttl').pipe(serialize()).pipe(process.stdout);
Where serialize() does the job of parsing and serializing the code at the same time right now. I use the through module to interface with the stream.
Now I have some more methods (not the real function declarations, but I hope you get the point):
getRecipe(parsedRDF) -> takes the parsed RDF (as a JavaScript object) and tells me how to use it
createMeal(parsedRDF, recipe) -> takes the parsed RDF and the recipe from above and creates a new RDF object out of it
this new object needs to get serialized and sent to the browser
(In the real world getRecipe will have to do a user interaction in the browser)
I like the idea of chaining this together via pipes for higher flexibility when I enhance the code later. But I don't want to serialize it to an RDF serialization every time; I just want to send around the JS object. From what I've read in the documentation, I could use the stringify module to get a string out of each step for piping it to the next step. But:
Does this actually make sense? That is, do I add unnecessary overhead, or is it negligible?
I don't see how I could give the parsedRDF to both methods with the dependency that getRecipe has to be called first and its output is input for createMeal as well. Are there modules which help me with that?
It might be that I have to ask the user for the final recipe selection, so I might need to send stuff to the browser there to get the final answer. Can I do something like this over sockets while the pipe is "waiting"?
I hope this shows what I'm trying to do; if not, I will try to give more details/rephrase.
Update: after sleeping on it, I figured out some more things:
It probably doesn't make sense to serialize a format like RDF into something non-standard when there are official serialization formats. So instead of using stringify I will simply pass an official RDF serialization between the steps.
This does imply that I parse/serialize the objects in each step, and this surely adds overhead. The question is: do I care? I could extend the RDF module I use to parse from a stream and serialize into one.
I can solve the problem of the dependency between getRecipe and createMeal by simply adding some information from getRecipe to parseRDF; this can be done very easily with RDF without breaking the original data model. But I would still be interested to know if I could handle dependencies like this with pipes.
Yes, it's okay to make a stream of JS objects; you just have to remember to pipe it through something that will serialize the stream again before writing it to IO.
I'd recommend writing a module called rdf-stream that parses and serializes RDF. You would use it like this:
var rdf = require('rdf-stream')
fs.createReadStream(file) //get a text stream
.pipe(rdf.parse()) //turn it into objects
.pipe(transform) //optional, do something with the objects
.pipe(rdf.stringify()) //turn back into text
.pipe(process.stdout) //write to IO.
And it could also be used by other people working with RDF in Node, awesome!
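For completeness, a rough sketch of what the parse()/stringify() pair inside such a module might look like, using the through2 module in object mode; the naive whitespace-based triple handling (which ignores chunk boundaries) is a simplifying assumption:
var through2 = require('through2');

// parse(): text in, JS objects out (object mode on the readable side).
exports.parse = function () {
  return through2.obj(function (chunk, enc, callback) {
    chunk.toString().split('\n').forEach(function (line) {
      var parts = line.trim().split(/\s+/);
      if (parts.length >= 3) {
        this.push({ subject: parts[0], predicate: parts[1], object: parts[2] });
      }
    }, this);
    callback();
  });
};

// stringify(): JS objects in, serialized text out.
exports.stringify = function () {
  return through2.obj(function (triple, enc, callback) {
    callback(null, triple.subject + ' ' + triple.predicate + ' ' + triple.object + ' .\n');
  });
};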