Hi Team,
Right now all of our GraphQL requests to Magento are sent as POST, even when they only fetch data. Since only GET requests can be cached, we are trying to convert the POST requests to GET. One drawback I have read about is that the query has to be sent as a URL query parameter, since GET requests can't have bodies. This can be problematic with bigger queries, because you can easily hit a 414 URI Too Long status on certain servers.
The usual best practice is to always use POST requests with an application/json Content-Type.
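To make the difference concrete, here is a minimal sketch of the two request styles (the endpoint and query below are placeholders, not our real Magento calls):

// Placeholder endpoint and query, only to illustrate the two request styles.
const endpoint = "https://example.com/graphql";
const query = '{ products(search: "bag") { items { name } } }';

// POST: the query travels in the JSON body, so size is not an issue, but CDNs rarely cache it.
const postResponse = await fetch(endpoint, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
});

// GET: the query travels as a URL parameter, which is cacheable but can hit the 414 limit.
const getResponse = await fetch(endpoint + "?query=" + encodeURIComponent(query));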
Please suggest what other drawbacks there are if we make them GET requests.
The Adobe core CIF components also use POST requests, so how can we implement caching for those fetch calls?
TIA
Hi @HeenaMadan,
To the best of my knowledge, there are several drawbacks to using GET requests for GraphQL queries, including:
Query size limitation: As you mentioned, GET requests can only include query parameters in the URL, which means that large queries can cause the URL to exceed the maximum allowed length, resulting in a 414 URI Too Long status.
Caching: GET requests are typically cached by the browser and intermediate proxies, which can lead to stale data being returned if the underlying data has changed.
State change: GET requests are not supposed to change server state (GET is defined as a safe, idempotent method), so it's considered best practice to use GET only for fetching data and never for modifying it.
Regarding caching for GraphQL queries, one solution is to cache at the application level. For example, Apollo Client's InMemoryCache gives you client-side caching of query results, and Apollo's cache-control support lets you specify caching rules for specific fields or queries, so you can cache the results of frequently used queries to improve performance.
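As a rough illustration (the endpoint and query below are placeholders, not your Magento schema), client-side caching with Apollo Client could look like this:

import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

// Hypothetical endpoint; query results are stored in the in-memory cache.
const client = new ApolloClient({
  uri: "https://example.com/graphql",
  cache: new InMemoryCache(),
});

// "cache-first" serves repeated queries from the cache instead of hitting the server again.
const result = await client.query({
  query: gql`{ products(search: "bag") { items { name } } }`,
  fetchPolicy: "cache-first",
});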
You can also use edge caching via a CDN (Content Delivery Network) to cache the responses of your GraphQL requests.
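As a sketch of that idea (a hypothetical Express middleware, not Magento- or Fastly-specific configuration), edge caching mostly comes down to GET GraphQL responses carrying a cacheable Cache-Control header:

import express from "express";

const app = express();

// Hypothetical middleware: mark GET GraphQL responses as cacheable so a CDN such as Fastly can keep them.
app.use("/graphql", (req, res, next) => {
  if (req.method === "GET") {
    res.set("Cache-Control", "public, max-age=300"); // cache at the edge for 5 minutes
  }
  next();
});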
Another approach is to use a GraphQL proxy service that supports caching, like AWS AppSync or Apollo Engine. These services handle caching at the server level and automatically expire and update the cache when the data changes.
Regarding the Adobe core CIF components, they likely use POST requests in order to handle larger payloads and complex queries. If caching is a concern, you can either consider the approaches above or try to split your queries into smaller, simpler ones that are easier to cache.
In summary, it's generally best practice to use POST requests for GraphQL queries. If you need to use GET requests, you can implement caching at the application level or via edge caching, keeping in mind the URL size limitation and making sure those requests don't change state.
Thanks,
Ravi Joshi
@HeenaMadan Can you please post the query, both the POST version and the GET version of it?
Also, please consider using ; for query parameters instead of ? in the GET request.
Sample GET call
http://localhost:4505/graphql/execute.json/test-aem-caas/person-by-path;path1=/content/dam/digi-cont...
Sample GET call with URL encoding
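As a rough sketch of the encoded variant (the persisted query name and parameter name come from the sample URL above; the parameter value and host are placeholders), the encoding can also be done in the fetch call itself:

// Persisted query name and parameter name are from the sample URL above; the value is a placeholder.
const path = "/content/dam/sample-folder/sample-person";
const url =
  "http://localhost:4505/graphql/execute.json/test-aem-caas/person-by-path" +
  ";path1=" + encodeURIComponent(path);

const response = await fetch(url);
const data = await response.json();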
Thanks @Ravi_Joshi for the information. That's what I was thinking. We can cache GET requests at the Magento Fastly CDN, but why don't the CIF core OOTB components use GET (is it because the URL becomes too long and would trigger an HTTP error?) and instead call Magento every time to fetch data?
For POST we would need to implement an additional app-level cache.
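For reference, a minimal sketch of what such an app-level cache in front of POSTed GraphQL calls could look like (the endpoint and the 5-minute TTL are assumptions, not what the CIF components actually do):

// Minimal in-memory cache keyed by query + variables; endpoint and TTL are assumptions.
const queryCache = new Map<string, { data: unknown; expires: number }>();
const TTL_MS = 5 * 60 * 1000;

async function cachedGraphqlPost(query: string, variables: object = {}): Promise<unknown> {
  const key = JSON.stringify({ query, variables });
  const hit = queryCache.get(key);
  if (hit && hit.expires > Date.now()) {
    return hit.data; // serve repeated queries from the app-level cache
  }
  const response = await fetch("https://example.com/graphql", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  });
  const data = await response.json();
  queryCache.set(key, { data, expires: Date.now() + TTL_MS });
  return data;
}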