Wednesday, August 20, 2008

BlazeDS does not make use of the HTTP caching infrastructure

There were several comments on my post about my doubts that BlazeDS scales well.

James Ward
said...


Hi Markus,

You have to also consider that RIAs in Flex are
architected very differently than typical web applications. In a
typical Flex application most requests to BlazeDS's servlet handler
will be to either get data (which is then usually held in memory on the
client until the user closes the page) or to update data. Most of the
time when these operations are performed the response will be different
so caching doesn't provide much. If a developer decides that something
should be cached they can easily store that data in a Local Shared
Object (a big, binary cookie in Flash) - or if the app is using AIR
then it can save the data in the local SQLite DB. There are also
emerging open source frameworks that assist in handling this caching.

-James (Adobe)

Thanks, James, for responding. Yes, I understand that Flex comes at this from a somewhat different angle. A lot of the early Flex applications might have been used to show "real time" data, such as stock tickers.
And yes, with Flex you usually only get data from the server, but the same is true for modern web applications. GWT uses a similar approach for remoting. PURE is a JavaScript framework that also only sends data. Both GWT and PURE work well within the Web infrastructure (Web caching proxies, for example), because the server can set how long the data should be valid. This metadata about the lifetime is sent together with the response.
I don't see how I can do the same with BlazeDS.
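For comparison, this is roughly what the "lifetime travels with the response" idea looks like on the server when plain HTTP is used; a minimal sketch with the standard Servlet API, where the servlet name and the payload are made up:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet returning data that only changes once a day.
    // The server attaches the lifetime to the response, so the browser and
    // any Web caching proxy can answer repeat requests for the same URL
    // without contacting the application server again.
    public class DailyRatesServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            // Valid for 24 hours, for shared proxies as well as the browser.
            response.setHeader("Cache-Control", "public, max-age=86400");
            response.setDateHeader("Expires", System.currentTimeMillis() + 86400L * 1000L);
            response.setContentType("application/json");
            response.getWriter().write("{\"eurUsd\": 1.47}");
        }
    }

As far as I can tell, BlazeDS remoting calls go over POST, so this kind of response metadata never comes into play there.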

Yes, of course I can build a cache on the Flex side. But that is something I would have to do in addition, and the lifetime of the objects in such a cache could not easily be controlled by the server.

I therefore still believe that there's room for improvement for BlazeDS.

In a similar response, Stephen Beattie
said...

Hmmm. With a 'stateful' Flash/Flex front-end, there will be fewer requests made to the server, as generally only the data is requested once the interface SWF has been downloaded. For the sort of application where the data is changing frequently, not caching makes sense to me. Besides, you can always implement a level of caching on the server side to prepare the data. I fail to see how this affects scalability. If your data isn't going to change then there's no real need for BlazeDS. Just load it as gzipped XML or something you can serve up with a cache HTTP header.


Basically he says that BlazeDS is for "real time" data only. IMHO that is a major limitation, because I can't see a technical reason why BlazeDS could not support the HTTP caching infrastructure.

Anonymous
said...

[snip] I wouldn't want my transport to arbitrarily
decide what data to cache and what not to. I would want to build and
control that caching code myself. Not doing that can break
transactional isolation in an application.

[snip]

I want to be able to control from my server how long the data that was just sent is valid, because the server might know, for example, that the data will only be updated once a day.

12 comments:

Unknown said...

Check my new post for answers to your comments:

http://kohlerm.blogspot.com/2008/08/blazeds-does-not-make-use-of-http.html

Anonymous said...

I don't think you understand.

If your server knows the data will only be updated once per day, then it can create a static file that it recreates when it is outdated, and Apache/IIS can deal with caching it; or you can create a cache using AOP in your bean. Or you can add a little intelligence to your Flash app, since it, and not the browser, will be asking for the data.

This is like an RPC/RMI call, not a typical page view.

I think I would be fairly upset if my AS ever tried to cache the results of an EJB call.
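For what it's worth, the "cache in your bean" idea could look roughly like this; a minimal sketch using a plain wrapper instead of AOP, where RatesService and loadRates() are made-up names:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal sketch of a server-side cache with a fixed time-to-live.
    public class CachingRatesService {

        public interface RatesService {
            Object loadRates(String key);
        }

        private static final long TTL_MILLIS = 24L * 60 * 60 * 1000; // "once a day"

        private final RatesService delegate;
        private final Map<String, CachedValue> cache =
                new ConcurrentHashMap<String, CachedValue>();

        public CachingRatesService(RatesService delegate) {
            this.delegate = delegate;
        }

        public Object loadRates(String key) {
            CachedValue hit = cache.get(key);
            if (hit != null && System.currentTimeMillis() - hit.createdAt < TTL_MILLIS) {
                return hit.value; // still fresh, skip the expensive backend call
            }
            Object fresh = delegate.loadRates(key);
            cache.put(key, new CachedValue(fresh));
            return fresh;
        }

        private static final class CachedValue {
            final Object value;
            final long createdAt = System.currentTimeMillis();
            CachedValue(Object value) { this.value = value; }
        }
    }

Note that this only saves work on the server; the request from the Flash client still travels over the network, which is exactly the round trip that HTTP caching would avoid.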

Unknown said...

Hi tristan,
My point is not how the server would cache the data. Because the server knows that the data is not going to change until the next day (for example), it can tell the client, when sending the data, that this data is not going to change, and therefore the client doesn't need to ask again (until the next day).

I know that it is supposed to be RPC, but I don't get why it should not be cacheable anyway.

Regards,
Markus

Anonymous said...

My point is that if you need HTTP caching, your application is horribly broken.

Flex/Flash, or even Java applets, do not use the traditional browser cache, so storing data there (the HTTP cache) is of no use. If you need to persist data across sessions, then the best option would be AIR's SQLite DB.

If you have a data file that you think should be cached at the HTTP level, i.e. at the ISP for all of the ISP's customers, then publish that file using Apache/IIS and let HTTP do its thing.

BlazeDS is for providing a backend to Flex. This is like Ajax on steroids. If your app decides it needs data, it needs data; whether it gets it from a static file on an Apache/IIS web server or from invoking a method/service on an EJB/web service doesn't matter. If your app doesn't have the brains to determine that it already has the data... well, I'm afraid your code won't scale.

If you are using a dynamic method to serve static content, then you are doing it wrong. Having HTTP caching will not help you.

Unknown said...

Hi Tristan,
1. Flex does use the browser cache as long as you send plain HTTP requests directly. So you can build a Flex application against a REST-style web service and caching will still work.

2. Caching is not an all or nothing solution.
You usually cache for a given amount of time.
You say "if your app doesn't have the brains to determine if it already has the data". Of course the app knows that, but if it doesn't know (without additional effort) how old the data is and how long the server "thinks" the data will be valid, it doesn't have many choices.
I guess you don't want to suggest that it should cache the data forever?

Regards,
Markus

Ariel Scarpinelli said...

I'm running into the same kind of "trouble". In my case I'm using AMFPHP instead of BlazeDS, but for this purpose it is the same... as long as the "method call parameters" go by POST, your browser won't cache the response.

So the "solution" is simple: If your application is well divided you will have some sort of Model package where you will store responses from the queries to the backend; as well a Business or in the same Model package a set of classes for doing the actual RPC. For every result set that you know is cacheable you simply change the Service request for an appropiated cache file HTTPService request (/datacache/myMethodResponseCache.xml for example). In the other side when data gets updated you recreate the cache file.

It might seem like more work, but it has a few advantages:

- You might not (actually you almost never) know exactly when the data will change, so setting an expiry datetime value is almost useless. As the cache file gets regenerated on data updates, you avoid the problem. HTTP avoids re-downloading via an If-Modified-Since header.

- You completely avoid running the backend application on the server, as you are just requesting a static file. You also avoid making a query against the database. That helps a lot with scalability.

The drawback is that the idea only works for fixed queries; but if the amount of data is relatively small, you can just cache all of it and then apply filters inside the application.
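A rough sketch of the server side of this approach, regenerating the static cache file whenever the data changes, so Apache/IIS can serve it with normal HTTP caching; the file path matches Ariel's example, everything else (class name, method name, row format) is made up:

    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.Writer;
    import java.util.List;

    // Rewrite the static XML file under the web server's document root
    // whenever the data is updated; the Flex client fetches it with a
    // plain HTTPService GET instead of a remoting call.
    public class ResponseCacheWriter {

        private static final File CACHE_FILE =
                new File("/var/www/htdocs/datacache/myMethodResponseCache.xml");

        public void onDataUpdated(List<String> rows) throws IOException {
            CACHE_FILE.getParentFile().mkdirs();
            Writer out = new FileWriter(CACHE_FILE);
            try {
                out.write("<rows>");
                for (String row : rows) {
                    out.write("<row>" + row + "</row>"); // no escaping, just a sketch
                }
                out.write("</rows>");
            } finally {
                out.close();
            }
        }
    }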

Unknown said...

Hi Ariel,
To all others:
Sorry the comment form stopped working (at least for me). I changed it to use a popup window, which works for me.

@Ariel Thanks for the detailed explanation!
Yes I know that there are workarounds to this problem.
These workarounds might be acceptable for some applications and not for others.

Still, IMHO the problem is that those workarounds don't play well with the Web/Internet infrastructure.

Your "emulation" of the
If-modified-since behavior makes sense to me, but still it would not work well together with Web Proxies. To work well with web proxies BlazeDS would have to use GET's and would have to set the "If-modified-since" header and then transparently return cached data in case of an HTTP 304 response.
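To make that concrete, the server half of such a conditional GET could look like this; a minimal sketch with the standard Servlet API, where lastUpdateMillis() stands in for whatever the backend knows about the age of the data:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Compare the client's If-Modified-Since header with the data's last
    // update and answer with 304 Not Modified when nothing changed,
    // so no body has to be sent at all.
    public class ConditionalDataServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            long lastModified = lastUpdateMillis();
            long ifModifiedSince = request.getDateHeader("If-Modified-Since"); // -1 if missing
            // HTTP dates have one-second resolution, hence the /1000 comparison.
            if (ifModifiedSince != -1 && lastModified / 1000 <= ifModifiedSince / 1000) {
                response.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
                return;
            }
            response.setDateHeader("Last-Modified", lastModified);
            response.setContentType("application/xml");
            response.getWriter().write("<rows>...</rows>");
        }

        private long lastUpdateMillis() {
            // Assumption: in a real service this would come from the data layer.
            return System.currentTimeMillis() - 60L * 60 * 1000;
        }
    }

The client half, transparently answering from the local copy when the server replies with 304, is what browsers already do for plain GETs; it is what BlazeDS's channel would have to add.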

At first glance, I agree that the "expires" caching strategy might not seem that useful, but there are certainly examples where it is. To scale well you have to make compromises, and not all types of data need to be shown in real time.
"Real time" is anyway relative. If you click on something that will bring up a table with data and you look at the data, it might have changed anyway in the meantime. So you have to admit that your data is never real time anyway.

Anonymous said...

You have a valid argument that Blaze won't use HTTP caching. However:
1. How much data can you possibly want to cache on the client before Blaze won't scale? You have to be using a lot!
2. Using a shared object is about 3 lines of code.
3. In the mx:RemoteObject tag you can specify a custom URL. If you call a custom servlet, you can probably configure it to cache via HTTP.

Unknown said...

Hi Brian,
Thanks for your comment!
I don't have to cache that much to be able to scale better. For example (I'm making this up): if your application server can handle 100 requests per second, which would be enough to handle 1000 users, and caching saves 2 out of 10 requests, then, assuming that all requests cost the same and you don't run into other limitations, each user now causes only 80% of the load, so you could handle 1000 / 0.8 = 1250 users.
Agreed, shared objects could be used for caching, but the point is that the initial information about what should be cached should come from the server.

Option 3 sounds interesting and I will need to check it, but at the moment I'm investigating alternatives to BlazeDS, for different reasons.

Anonymous said...

Hi Markus,

With Blaze, the way to avoid the issues you're discussing here would in my opinion be to use a "state of the world" call on startup of the Flex application to retrieve the initial data, and then rely on Blaze's push service to update the data when it goes stale. This way, the server can determine when the data needs to be updated, and the client only has to worry about the initial retrieval at startup.
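For reference, the server side of such a push could look roughly like this, assuming BlazeDS's messaging API; "dataUpdates" is a made-up destination that would have to be declared in messaging-config.xml, and the Flex client would subscribe to it with a Consumer:

    import flex.messaging.MessageBroker;
    import flex.messaging.messages.AsyncMessage;
    import flex.messaging.util.UUIDUtils;

    // After the client has fetched its initial "state of the world",
    // the server pushes changes instead of waiting to be asked again.
    public class DataPushService {

        public void pushUpdate(Object freshData) {
            MessageBroker broker = MessageBroker.getMessageBroker(null);

            AsyncMessage message = new AsyncMessage();
            message.setDestination("dataUpdates");
            message.setClientId(UUIDUtils.createUUID());
            message.setMessageId(UUIDUtils.createUUID());
            message.setTimestamp(System.currentTimeMillis());
            message.setBody(freshData);

            broker.routeMessageToService(message, null);
        }
    }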

Unknown said...

Hi Espen,
Yes, push could help somewhat, but it's really only a workaround. It still would not benefit from HTTP proxies, and it requires special server support (Comet support) to have acceptable resource consumption. The server would also have to know whether the client is still interested in updates for the particular data.

redben said...

So BlazeDS can't benefit from HTTP caching. In the same app, why not use BlazeDS for requests where caching is not needed, and simple HTTP requests/XML for cacheable data?