TPR5: Providing More and Using Less with Caching

Jason Fish 
Lead Web Application Programmer, Purdue University


The audio for this podcast can be downloaded at


Jason Fish: All right. As they said, my name is Jason Fish. I work for Purdue University, ITaP, Information Technology at Purdue. I'm on Twitter, @jasondfish. I'd encourage everybody to send me things as needed throughout the presentation. If you have questions, comments, suggestions, you think I'm completely wrong, throw it up there, put the hashtag on there, let me know. I'm open to suggestions.

Has anybody ever seen this page when they go to Digg, click on a link, and see "server unavailable"? Nobody? Yeah? Yeah, exactly. There could be many reasons for this. Somebody's got a great joke out there, a great picture, and everybody wants to go see it, and it makes it to the homepage at Digg... well, the old Digg, I guess. Not the new Digg anymore. Nobody goes to it, right?

[Laughter]

Jason Fish: So they go there and they get their hundreds of thousands of people viewing their pages, and they get nothing because they weren't set up to handle these users. This is what caching is trying to help. So here we go.

 01:00

Why cache? Why use caching? Does everybody know what caching is? Anybody who doesn't know what caching is? Why don't we start there first. Nobody? OK. I'll continue, then.

It helps with your performance and your scalability, it reduces cost of ownership, which I'll talk about in a little bit, it moves the interactions closer to the user, and it simplifies complex processes.

So here you've got your multi-tiered environment, like everyone has. You've got your happy user, you've got your Web pages, and you've got your database. Your user requests a page... let's say it's your homepage. It gets data from the database, sends it back to the page, the page gets processed, sends the HTML back to the user. All right. Perfect. The user gets what he wants.

We've got User 2 coming in, same thing: requests the homepage, the homepage asks for the data, that data gets sent back, and it gets sent back to the user. Already we can see the same things happening again and again and again. User 3 comes in, same thing: asks for the same exact page, requests the same data from the database, sends the same response back to the page, processes that page, it sends it back to the user.

 02:05

We're still all right. Three users, one coming in right after the other, not too big of a deal. We have all those interactions over there from the page to the database, but no big deal.

What happens when you have your five users coming in all at the same time asking for the same page? That means five requests get sent to the database. Five responses get sent back to the page. That page gets processed five more times, and it gets sent back to the users. So far we're at eight database calls for the exact same data. It's really not too bad. Eight database calls.

What happens when you're getting 80 at a time? 800 at a time? Or 8,000 at a time? That interaction between there is going to get really messy. So what we're trying to do is make one database call for the data that everybody needs, reduce that interaction between your Web server and your database server.

Some general caching rules. And these are very general. If it's used more than once, it can probably be cached. And if it's not specific to a user, it can also be cached.

 03:08

Now I don't mean user data; I mean specific to the user. If I go to page 1 and you go to page 1, and we see the same thing, that's not specific. If I go to page 1 and see different content because I'm me, and you see different content because you're you, that's specific to me, and that might not be a good idea to cache those types of pages. You don't really get the same benefits.

You can overdo caching, so we'll talk a little bit later about where to start and how not to overdo it. You want to support failover. Caching is something extra you're adding into your system that helps make it a better site. It's not something you add in first, so you need to be able to support failover.

Anybody notice Facebook being down, probably a few weeks or a month ago, for hours and hours and hours? That never happens. Well, that had something to do with caching. It wasn't exactly the caching servers' fault, but it had to do with that, and they didn't support failover very well.

 04:06

By also supporting failover, you're able to use cheaper and less reliable machines for your caching servers, which, as I talked about earlier, reduces the cost of ownership.

Test, test, test. Are we all developers in here? Anybody not a developer? All right. Well, when you talk to your developers and you ask them to implement caching, it's very important for them to test. Since they're not making that complete roundtrip back to the database and processing that page again before sending it to the user, being able to test that the user is still getting the correct data is very important.

The technology stack we use within my group isn't really important, but I'm going to show some code examples. We use ASP.NET MVC 2.0 and SQL Server 2008 R2. If you're not using these, it doesn't mean this doesn't apply to you. You can use the general rules. But I am going to show some code examples and wanted to make sure you knew what I was talking about.

 05:01

The first thing I want to talk about is output caching. Sometimes it's called page caching. This is when you cache the actual result of the page. So it's already made the call to the database, brought the data back, processed that page... maybe there are some complex processes going on... and it's sending that back to the user. But before you send it back to the user, you cache it locally, so the next time somebody asks for that page, you don't have to do any of the processing. You don't have to go get the data; it's already there to send right back to the user. Page output, there it is.

Great candidates for this would be, again, pages that aren't user-specific, like a landing page or an FAQ page. A landing page is something a large number of users are going to see. There could be data being fetched there or processes that are going on. And if it's getting hit over and over and over by a lot of different users, then cache those results, put them in memory, and make it quicker for everyone to get to and see. Frequently Asked Questions, again, that data's rarely going to change. Your Frequently Asked Questions don't change on a minute-to-minute basis. You usually have 10 Frequently Asked Questions and they stay the same for a week, a month, a year, and then they get updated. So that's a good candidate for output caching.

 06:14

A subsection of this output caching is what's called partial page caching. This is when you don't cache the entire result page but only specific parts.

So think about a forum. If I've made 10 posts on there, over there on the left of each one of my posts is my name, my picture, when I first started the forum, how many posts I have total. Instead of processing that each time that you're asking for that forum page, cache that piece of the page and serve that up, and it makes the page load faster for the user.

In .NET, you have a number of... Can you see the little red dot down there, anybody? Yeah? Kind of? OK. Oh, the thumbs up in the back, very nice.

There's a number of parameters that you can specify. The two that I want to talk about are the duration and the VaryByParam.

 07:06

The duration, as you can imagine, is required. How long is it going to be in the cache? It's specified in seconds, but it's not guaranteed to be in cache for that entire period of time. Can anybody guess why something you want to stay in there might not last the entire time that's specified?

Audience 1: It fills up.

Jason Fish: Yes! The cache fills up. So every system has a maximum amount of space to put stuff in. When that space fills up, the older stuff gets pushed out. So even if you specify that it's going to be in there for a year, if the cache fills up, it's going to push that out until it's called again.
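What "pushed out" looks like in practice is an eviction policy. The sketch below is not from the talk's .NET slides; it's a minimal, language-neutral Python illustration of a least-recently-used cache, a common policy: when the cache is full, the oldest entry gets evicted even if its expiration hasn't been reached.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: when full, the oldest entry is
    evicted, even if its expiration time hasn't been reached yet."""
    def __init__(self, max_items):
        self.max_items = max_items
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # mark as recently used
        return self._items[key]

    def insert(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)
        if len(self._items) > self.max_items:
            self._items.popitem(last=False)  # evict the least recently used

cache = LRUCache(max_items=2)
cache.insert("a", 1)
cache.insert("b", 2)
cache.insert("c", 3)   # cache is full, so "a" is pushed out
```

Real caches (the .NET cache, memcache) apply variations of this same idea; the class name and API here are invented for illustration.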

In MVC, you have your method in the controller. This one's just passing back one query result set, but think about a page that had 10, 12, 15 different query results being passed to the page. That might make a better example, but I couldn't fit it all on the slide.

 08:05

You add your output cache in there with your duration: 600. So how long is this going to stay in cache? Ten minutes, there we go. You get a thumbs up plus. Easy enough. See? With just one little line added in there, .NET takes care of it for you. Other languages do similar things. But I've added one line in there, and that page is going to load faster for every user that comes to my site.
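The C# on the slide isn't reproduced in the transcript, but the idea behind an `[OutputCache(Duration = 600)]` attribute can be sketched in a few lines of Python. The decorator name and structure below are invented for illustration; the point is that the first request pays for the query and the rendering, and every request inside the duration is served straight from memory.

```python
import time

def output_cache(duration):
    """Cache a page's rendered result for `duration` seconds, analogous
    to .NET's [OutputCache(Duration = 600)] (600 s = 10 minutes)."""
    def decorator(render):
        cached = {}  # holds the rendered html and its expiration time
        def wrapper():
            now = time.monotonic()
            if "html" in cached and now < cached["expires"]:
                return cached["html"]          # served from cache, no processing
            cached["html"] = render()          # full roundtrip: query + render
            cached["expires"] = now + duration
            return cached["html"]
        return wrapper
    return decorator

calls = 0

@output_cache(duration=600)
def homepage():
    global calls
    calls += 1  # stands in for the database query and page processing
    return "<html>homepage</html>"

homepage()
homepage()  # second request is served from cache; `calls` stays at 1
```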

The VaryByParam is also required, but it can be set to none. As we saw on the previous slide, I set VaryByParam to none because there is no parameter for that method.

It creates a different cached version for each value of the parameter. So if you pass in a parameter value of 1, it creates one cache in memory. If you pass it in 2, another one, 3, another one, and so on and so forth. You can specify multiple parameters very easily. So if your method has 10 different parameters, you can specify all 10 in there.

 09:06

What that means is the combinations you might have in your cache can get so specific that it's no longer useful to be caching at all. So as the number of your parameters grows, it becomes a less likely candidate for caching. And it results in many more items in your cache. So the more parameters you have, the quicker your cache is going to fill up, and the quicker it's going to push out stuff that you might need.
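VaryByParam's behavior can be sketched like this (Python, with invented names, not the slide's C#): the cache key includes each argument value, so every distinct value, or combination of values, gets its own cached copy, which is exactly why many parameters mean many entries.

```python
import time

def cache_vary_by_params(duration):
    """One cached entry per distinct combination of argument values,
    like VaryByParam. More parameters means more combinations,
    which means a fuller cache."""
    def decorator(fn):
        entries = {}
        def wrapper(*args):
            key = (fn.__name__,) + args        # e.g. ("get_post", 1)
            now = time.monotonic()
            hit = entries.get(key)
            if hit and now < hit[1]:
                return hit[0]                  # cached copy for these args
            value = fn(*args)
            entries[key] = (value, now + duration)
            return value
        wrapper.entries = entries              # exposed so we can inspect it
        return wrapper
    return decorator

@cache_vary_by_params(duration=600)
def get_post(post_id):
    return f"post {post_id}"  # stands in for the database lookup

get_post(1); get_post(2); get_post(3); get_post(1)
# three distinct ids -> three cached versions, not four
```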

Right here we have the VaryByParam as ID. Why did I use ID right there? Anybody?

Audience 2: That changes.

Jason Fish: Because that's the part that changes, yes. That's our parameter in our actual method right here. So that's where you get the names. They're not just made up, they're not some standard. It does make sense here. Anybody know how long this one stays in memory?

 10:02

Audience 3: Until it gets kicked out.

Jason Fish: Until it gets kicked out, yes. We have the max value for an integer here. I don't remember what it is, but it's something really large. So it's pretty much going to stay in memory until it gets kicked out. Thank you.

We can do that for this particular method because for us, posts can't get edited and posts can't get deleted. Once somebody makes a post, it's there forever. So we can put in that max value because it will always be there until it gets kicked out.

Data caching. This is a piece that we've really dove into and used wholeheartedly within our team. This is what we've found the most useful in our applications, because in an object-oriented language, that data can get called from multiple pages. So even if I've got the page cached, that doesn't necessarily mean the query's cached. Does that make sense to people?

 11:08

And also, most performance bottlenecks in any system under heavy load are occurring at the database level, not at the Web server level or any other level. It's usually, and I would say really, really 99% of the time, it's happening because of your database.

So you do exactly that. You get the query from the database and you put that into memory. You don't put the whole page results in the memory. You just put the results from that query in memory.

Good candidates for data caching are your most-used queries and also your most expensive queries. Does anyone know what I mean by 'most expensive' queries?

Yes. The queries that take the most time in your database.

So let's say you have a query that takes five or 10 seconds to run. Instead of every user waiting that five to 10 seconds whenever they hit a page that needs that data, it's already in memory, and you don't have to worry about it.

 12:10

Here's a very simple method that goes and gets a person by their ID, and we want to add caching to it. For caching, .NET has a whole system set up, and there's an insert method just like there was on the output cache.

The three parameters I want to talk about are the key, or what you're calling it in cache; the value, which is the object you're storing (it's pretty much a name-value pair: in this space in memory, you have names and you have the values stored under those names); and the expiration date.

The first thing you do when you want to cache is you specify where you're getting it from. So our name right here, Person_GetPersonByID, and then the parameter that I'm sending in.

 13:03

Naming of the key is very important, because if I use the same key name in two different methods, you're going to get the same results back in both places, and you're going to really confuse your user or get back the wrong object types. It's really problematic.

So what we've done is name it ObjectName_MethodName, and then any parameters that we're passing in. That works really well for us. You have to find your own convention that works within your development team and stick with it. It's something you really need to discuss to make sure everybody understands what you're doing before you move forward.
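That naming convention is simple enough to sketch. This little Python helper (an invented name, not from the slides) builds keys the way the talk describes, so two different methods can never collide on the same key as long as everyone uses it:

```python
def cache_key(object_name, method_name, *params):
    """Build a cache key as ObjectName_MethodName_param1_param2...,
    the convention described in the talk. Consistent naming keeps two
    different methods from colliding on the same key."""
    parts = [object_name, method_name] + [str(p) for p in params]
    return "_".join(parts)

key = cache_key("Person", "GetPersonByID", 42)
# -> "Person_GetPersonByID_42"
```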

Right here we're trying to get this out of cache. If there's nothing in there, it returns null. If there's something in there, it returns an object of some type.

So if it is null, we need to go get it from the database. After we get it from the database, we then insert it into .NET's cache. We specify the same key again. You'll see up here... the actual object that's being put into cache. Null... this is for dependencies; we'll talk about that in a little bit. How long it's going to be in the cache: 10 minutes. And I've specified here that we're not using sliding expiration.

 14:22

Sliding expiration is if I go to a page that needs this data at three o'clock, and I have sliding expiration turned off, then because of a specified 10 minutes here, it will be removed from the cache at 3:10. Sliding expiration is if I went there at three o'clock and someone went there at 3:08, the new expiration is 3:18. So you have to look at your data and see if it makes sense to use sliding expiration or not.

And then, just as before, we return that person. But because we don't know what's being returned... it's just a plain object... we have to cast it to a person.
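The whole walkthrough, check the cache, fall back to the database, insert with an expiration, is the cache-aside pattern, and the sliding-versus-absolute distinction can be shown in a few lines. The class and its API below are invented Python for illustration (not the .NET `Cache.Insert` signature), with a `now` parameter standing in for the clock so the behavior is easy to follow:

```python
import time

class DataCache:
    """Cache-aside sketch. sliding=False gives absolute expiration
    (removed N seconds after insert); sliding=True pushes the
    expiration back on every read."""
    def __init__(self):
        self._items = {}  # key -> [value, expires_at, duration, sliding]

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        item = self._items.get(key)
        if item is None or now >= item[1]:
            return None                  # miss (or expired): caller hits the DB
        if item[3]:                      # sliding: a read at 3:08 moves a
            item[1] = now + item[2]      # 3:10 expiration out to 3:18
        return item[0]

    def insert(self, key, value, duration, sliding=False, now=None):
        now = time.monotonic() if now is None else now
        self._items[key] = [value, now + duration, duration, sliding]

cache = DataCache()
cache.insert("Person_GetPersonByID_42", {"id": 42}, duration=600,
             sliding=False, now=0)
cache.get("Person_GetPersonByID_42", now=300)   # hit; expiration stays at 600
cache.get("Person_GetPersonByID_42", now=601)   # None: absolute expiration passed
```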

Cache removal, also very simple in .NET and very simple in many other languages. You just specify that key that we were using earlier.

 15:08

This needs to be added everywhere that data can be changed. Anywhere within your application that that person object can be changed, you need to make sure you're removing it from the cache. So in your edit methods and delete methods, and if you have edit methods that change multiple people, you need to make sure that removal is in there. And this is really the hardest part to get right, which is why testing is really critical.
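The remove-on-write rule can be sketched like this (plain Python dictionaries stand in for the real cache and database; the function names are invented): the write goes to the database first, then the matching cache key is dropped, so the next read can't see stale data.

```python
db = {42: {"id": 42, "name": "Ada"}}
cache = {"Person_GetPersonByID_42": {"id": 42, "name": "Ada"}}

def update_person(person_id, fields):
    db[person_id].update(fields)                          # write the database
    cache.pop(f"Person_GetPersonByID_{person_id}", None)  # then remove from cache

update_person(42, {"name": "Grace"})
# the stale entry is gone; the next read falls through to the database
```

Missing that `pop` in even one edit path is exactly the bug the talk warns about: the database changes but readers keep getting the old cached object until it expires.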

There's two different methodologies as far as data caching: a push methodology and a pull methodology.

Push methodology invalidates the cache and repopulates it. So in your edit and delete functions, you immediately delete it from the cache and then put the new information back into the cache. This reduces the time for the first hit, so the next time that data is called, you can get it right from memory instead of calling it from the database.

This methodology also means that there's more information in the cache than is needed. When you immediately repopulate the cache after something's edited, you don't necessarily know whether anyone's going to call a page that needs that data anytime soon, and even though you're putting the stuff in the cache, it might push other, more valuable information out.

 16:18

The pull methodology, on the other hand, invalidates the cache and then leaves it empty. This makes for a longer first request. So the next time someone goes there, they have to actually hit the database to get the data back. But it does mean that the hottest items, the most-used items are the ones that are in the cache. And it's not actually in the cache until it's accessed.

So I ask you guys: which is better, push or pull?

Audience 5: It depends.

Jason Fish: Depends? What does it depend on?

Audience 5: Metric?

Jason Fish: OK. You're right. You're right. It does depend. It depends on your application. It depends on how you're using that data.

If it's something that you know is immediately going to be looked at again, you might as well put it back into the cache so it can be accessed right away. If it's something where you don't know if it's going to be accessed again anytime soon, use pull methodology and let that first request take the time.
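The two methodologies can be put side by side in a short sketch (invented Python names; dictionaries stand in for the cache and database). Both invalidate on write; they differ only in whether the fresh value goes straight back into the cache:

```python
db = {"faq": "v1"}
cache = {}

def read(key):
    """Pull-style read: on a miss, the first reader pays the database cost."""
    if key not in cache:
        cache[key] = db[key]     # the expensive trip to the database
    return cache[key]

def write_pull(key, value):
    db[key] = value
    cache.pop(key, None)         # pull: invalidate and leave it empty

def write_push(key, value):
    db[key] = value
    cache[key] = value           # push: invalidate and immediately repopulate

write_pull("faq", "v2")
after_pull = "faq" in cache      # False: next read pays the DB trip
write_push("faq", "v3")
after_push = cache.get("faq")    # "v3": already warm, but takes cache space
```

Which one fits depends, as the audience said, on whether the data is hot enough that repopulating immediately is worth the space.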

 17:16

Any questions? That make sense?

All right. Just for a second here, think about your environment, or think about an application that you have that you think you might take advantage of some caching methods.

This is something that we didn't do when we first started our implementation of caching, and it came back to bite us. Our Web environment is clustered, so we have two Web servers that run our applications simultaneously. What this means is the cache is saved on both servers. Whenever somebody asks for a page on Node A, it gets saved there. When they ask for it on Node B, it gets saved again. That means twice as much space as needed for the cache.

The validity of the data is also unknown. So let's say I call on Node A. That cache is in there, I can get it right there, and it's also... I can't use both hands here... it's also in Node A and Node B.

 18:12

So I go to Node A, I make my update, and I'm using pull methodology, so it deletes it from that cache. A user can still go to Node B, and it'll still be in the cache there, and they can still use it. That's problematic, right? You don't want to get emails saying people see deleted data, or people see data that hasn't taken on the edits they just made. You don't like getting those phone calls.

But it shouldn't stop you. There is an answer: memcache. Anybody heard of memcache? Yeah? It's free. Who doesn't like free? We don't have any money in our budgets, but free, you can do free.

It's open source. There's a huge community behind it. It's distributed, so whether you're using one server or a hundred servers, it runs them all just the same. It doesn't care. And there are libraries available for almost any language you can think of: PHP, Java, Ruby, and what we're using, .NET.

 19:08

What it's really doing is taking your individual Web servers' caches, which are separate on Node A and Node B, and making them into one shared cache pool. Then memcache takes care of it. You don't have to really think about it at all.

So as before, in this example, there are 64 megabytes of cache space on each Web server. Now you have 128 megabytes of space. So you've already doubled your cache space, and you've gotten rid of your cache-data validity problem, just by using this free tool.

And you might be thinking, 'Well, it's free, it's open source. That means it's crappy.' Well, in this case, it doesn't. Twitter uses it. YouTube uses it. Wikipedia uses it. Digg uses it, although that's probably not relevant anymore, like I said. WordPress uses it, and Flickr uses it. These are large communities with millions of users and hundreds of thousands, if not millions, of pieces of content, and they're using it to deliver their content to users. I think it's OK for our shop to use for our campus, right?

[Laughter]

 20:17

Jason Fish: So, memcache. You'll see that this implementation looks almost exactly like the .NET one. With the libraries that are available, they've made it almost seamless to transition into.

You first specify the key, just like we did before. You see if it's in cache or not. If it's not, then you go get it from the database and insert it in. Here's the key, here's the object, and here's the time span. It's that easy.

You don't have to make some huge change. You install it on your servers, you tell it which servers to use in your Web config, and then you implement it. It's that simple. It's just the same for removal: it takes the key and deletes it.
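How memcache lets two nodes share one pool comes down to every client mapping a given key to the same server. The toy Python sketch below (invented names, not a real client library) hashes the key onto the configured node list; real memcache clients use consistent hashing so that adding or removing a server remaps only a fraction of the keys, a refinement this simplification skips.

```python
import hashlib

def node_for_key(key, nodes):
    """Toy sketch of how a memcache client picks a server: hash the key
    and map it onto the node list, so Node A and Node B both store (and
    invalidate) a given key in the same place."""
    digest = hashlib.md5(key.encode()).hexdigest()  # md5 for hashing, not security
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["cache-a:11211", "cache-b:11211"]
n1 = node_for_key("Person_GetPersonByID_42", nodes)
n2 = node_for_key("Person_GetPersonByID_42", nodes)
# every web server computes the same node for the same key,
# which is what removes the two-copies / stale-copy problem
```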

 21:08

This is something that we found to be very useful because it was simple, it didn't cost anything, and it took care of our problem. We didn't have to check which server was the cache being saved on, is it right, is it not right, could we only do certain things. Memcache took care of all of that for us.

The first project we used this on is something called Hotseat. It allows for back-channel communication in the classroom. We have large lecture halls at Purdue, 300, 400, 500 students, and sometimes it's hard for those students to raise their hands. The professor doesn't see them or they're too embarrassed to ask their question.

Well, what we did here is build a tool that allows them to ask their questions, and they can even ask anonymously so they don't have to get embarrassed about their question, and the teacher can go on there a few times during class and say, 'What are the most popular questions?' or 'Is there a really engaging question that I should really talk more about because this is a good question?'

 22:11

But what this means is we're refreshing this page that the students are looking at every five seconds to make sure that what the students are looking at is fresh and they don't have to keep going up to their toolbar and hit refresh every five seconds to make sure they have their data.

What that means, though, is that we have a classroom of 400 students refreshing every five seconds. That's 4,800 database requests a minute. That's roughly 300,000 database requests in the hour that they're sitting there in lecture. As you can imagine, that started to really take a toll on our database.
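The arithmetic behind those numbers checks out:

```python
students = 400
refresh_interval_s = 5

refreshes_per_student_per_min = 60 // refresh_interval_s  # 12
per_minute = students * refreshes_per_student_per_min     # 4,800 requests/min
per_hour = per_minute * 60                                # 288,000, "roughly 300,000"
```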

So this is when we implemented memcache. We went from, as you'll see, during an hour lecture, 295,000 database requests when we didn't have any caching, to under 9,000 requests during that same hour when we turned caching on.

 23:04

We realized almost a 97% savings on our database when we implemented it. Just this. And if you can see this, we implemented it on only 10 queries. We didn't implement it on every query throughout the application.

We watched our logs, saw where it needed to be added, and we implemented it on those 10 queries. And like I said, almost 97% of the traffic to our database was removed, the pages loaded fast, and nobody even noticed how many people were on the site anymore.

After a successful implementation on one project, you try to implement it again. So we built this thing called Mixable over the summer. Where Hotseat is professor-driven... they pose a question and students post their questions each lecture... Mixable is more student-driven.

 24:01

So they log in, it gives them their classes, and they can post files with Dropbox, they can ask questions and other students will answer and comment, and they can post files and links and YouTube videos, and it will all be right here within the system.

And as you can see, there's a lot of queries on any given page: a query to get their classes, a query to get their groups, to get their lists, to get all the posts, images, videos, links, files, podcasts, their user data. There's a lot of queries on any one given page. So, again, perfect opportunity to use some caching.

For this implementation, we used LoadStorm, which is a load-testing software. It's really easy to use. I don't know if I can recommend it, but I would recommend it.

[Laughter]

Jason Fish: Yeah, you can't endorse things. I don't know how that works.

What we did is run a hundred-user test that took 30 minutes. It stepped up from one user to a hundred users over each time span, and as you'll see, the green bars are when we didn't have any caching, the red bars are when we had only our top query cached, and the blue bars are when we had our top six.

 25:12

Our total database calls got cut in half. We didn't have any more errors after implementing caching. The request time went down to a fraction of what it was. And yet our total requests during that 30-minute test went up three times, as did our requests per second.

What that really means is that even though we had three times more requests during our test, we had 50% of the database queries called. So think about that for a second. You were able to support three times as many requests to your site, and there was less strain on your database.

We went to zero timeouts. When we did the initial test without any caching, after about 35 simultaneous users hitting the site, we started to see timeout errors in this application, whereas when we implemented caching on just six queries, we got no timeout errors at all. You don't want your users sitting there for 30 seconds while the page loads and then all of a sudden getting nothing back, right? Nobody wants to see that.

 26:24

The page load time went down from 2.24 seconds to 0.35 seconds. And you might look at that and say, "Well, two seconds isn't really that long to wait." But when you're going from page to page and doing different things and trying to get something done, two seconds is a long time to wait for a Web page to load. So we cut it down to under half a second.

And that's with more requests per second. We went from nine requests per second to 30. So even though the requests went up, again, the time to deliver that page went down.

So it's the silver bullet, right? It's the answer to all our problems. Well, I want to say yes, but I would say it's pretty close. It's pretty close. There are some things we need to make sure we think about.

 27:08

The validation of the cache. Like I said before, it's something that can really bite you. You want to make sure that your environment can handle the cache and how you're implementing it, as well as making sure that you're invalidating and removing things from the cache in all the appropriate places.

Writing versus reading. If your application is very heavy on writing versus reading, then it's probably not a good application for a lot of caching, because you can't cache a database insert. You can only really cache the results of reads. That's another thing.

Dependencies are something you really need to think about. If I have a method that says, 'Get all of my posts' and I cache that, every time I go look at my posts anywhere, it gets it really quickly. Well, then, if I edit one of my posts, how do I make sure that I invalidate that list of objects that was returned? That can be a real tricky area. There's ways to take care of that, but it's something you really need to think about before you do your implementation.
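One common way to handle the list-invalidation problem just described is tag-based dependencies: each cached list records which items it depends on, and editing an item drops every cached list tagged with it. The talk mentions .NET's dependency parameter for this; the sketch below is an invented, generic Python version of the same idea.

```python
cache = {}   # key -> cached value
tags = {}    # tag (e.g. a post id) -> set of cache keys that depend on it

def insert_with_tags(key, value, deps):
    """Cache a value and record which items (tags) it depends on."""
    cache[key] = value
    for tag in deps:
        tags.setdefault(tag, set()).add(key)

def invalidate_tag(tag):
    """Editing one item drops every cached list that contains it."""
    for key in tags.pop(tag, set()):
        cache.pop(key, None)

insert_with_tags("Post_GetAllForUser_7", ["post 1", "post 2"],
                 deps=["post:1", "post:2"])
invalidate_tag("post:2")   # editing post 2 drops the cached list containing it
```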

 28:12

It doesn't mean that you can just implement it and everything will be just fine. You still have to optimize your queries, you still have to make sure your logic is correct. It doesn't fix a bad application; it makes a good application better.

So what I want you guys to take away is that caching can improve your performance and your scalability and reduce the cost of your systems. It's very easy to implement. No matter what language you're in, they all have a way to cache, I can almost guarantee it. And if yours doesn't, memcache probably can do it for you.

The results are life-changing. And I'll tell you, they are. On our team, we don't wait for our applications to get used enough to need it. After we get done developing, we build these load tests and then implement caching, because you never know the day one of your applications is going to get Digg-bombed or have some other site pushing users to your application.

 29:17

An intelligent implementation is needed. You can't and shouldn't just put it everywhere. You should put it where it's needed, and only where it's needed.

With that, that's about the last thing I have. I did want to let you know that I have two colleagues presenting tomorrow in this same track: Alex Kingman doing The Evolution of Form Design and Steve Heady doing Reputation Systems in Web Communities. They're great presentations. I've seen them, and you should come to them.

Any questions? Yes, Scott.

Scott: With .NET, does that truly store the data in memory, then? Are there problems with... Is there any time when that data gets stored on disk somewhere, or is it truly just pulling it from memory?

 30:01

Jason Fish: Using memcache?

Scott: Well, whatever method, so that you know the subsequent calls are not going to your database, they're truly coming out of memory?

Jason Fish: Yes and no. You can put them in different places. With .NET, you can tell it to store it in the browser or in a cookie or on the user's... you can tell it different places to store that data so that you can push it as close to the user as you can. You obviously don't want student grades pushed into the browser and saved in a cookie, but for things that make sense, you can put them where they make sense.

Not using .NET really doesn't make a difference. It's language-independent. I just used .NET as an example because that's what we use.

Yes.

Audience 7: Now, the stuff you showed us is all basically in the application. Is there any provision within the database itself, with something like MySQL, to cache its own queries? Some way that doesn't incur the overhead of running the query every time? You see what I'm asking?

 31:11

Jason Fish: Yes.

Audience 7: I'm just wondering if it has to be done in the application or if it can be done on the database side.

Jason Fish: It can be done on the database. I don't know as much about it... that's not how we've used it in the past... so I'm assuming that you have applications in different places that all call the same database, and you're trying to find a way to cache those results.

With memcache, you can put it in some place and call it from multiple applications so it doesn't have to be from the same application pool each time. So that might be an answer for you. But I don't know as much about the actual caching within the database.

Audience 8: You mentioned LoadStorm. Because I've got a lot of database stuff going on, if I wanted to get started with this, obviously I've got to start with some kind of benchmark so I know what's going on now. Is LoadStorm a tool that you use for that? Or if not, how do you get that data? Do you have that in your application... do you build that in?

 32:16

Jason Fish: What we do is build the application with no caching to start with, because we don't know where our pain points are going to be when we start an application. We build the application to the best of our ability, and then we start testing.

With LoadStorm, they have a free account for 20 users, and what you'll find is that even with 20 users, a lot of times your application is going to break anyway, with simultaneous users hitting your pages one after another after another.

What we do while we run that load test is track our database: see which queries are being called, which queries are being called the most, which ones are taking the most time. That's our starting point: we look at our database and see which queries are being called the most.

Audience 8: I guess I'm wondering, does LoadStorm have access to your database, then?

 33:02

Jason Fish: No. No, no, no. LoadStorm just hits the page and then tells you how fast it comes back. These are two different things we run simultaneously. We run the load test, and that gives us how fast the pages are being loaded, how many requests we're making per second, that kind of thing, while we also put a trace on our database so that while the load testing is going on, we can watch our database and know what's going on.

Does that make sense?

Audience 8: Yeah, yeah. I was just wondering about the database side of it... what tool are you using to actually look at the data? Is it something with the database itself?

Jason Fish: With SQL Server, which is what we're using, it comes with SQL Profiler, which does it for you. I'm not sure about other tools for other databases.

Yes.

Audience 9: We also use JMeter, which is an open source tool for load-testing. It has a lot of bells and whistles, and it can be a bit of a pain in the butt to get working, but...

 34:05

Jason Fish: We started down that path with JMeter, and it was taking us too long to get started. With LoadStorm, you tell it what your homepage is, it loads up that page, it asks, 'What do you want to do? Submit a form? Click a link?' and you do it, and it follows whatever path you give it. It's kind of nice. But we definitely did look at JMeter to start with.

Audience 9: You can also turn on slow-query logging, which you probably want to do to get more out of your database than you can...

Jason Fish: Oh, I would completely agree. All this load-testing should be done on development servers, not in any production environment.

Audience 9: It lets you set a threshold, so this query exceeded however many seconds, milliseconds, whatever.

Jason Fish: Some other question?

Audience 10: It seems like compared to beta... kind of the equivalent to caching on the database?

Jason Fish: It's very similar. The difference would be there that you're caching the execution plan, but you're not caching the actual execution of the query. You're caching the building of the plan.

 35:08

Audience 10: ... stored procedures and stuff like that...

Jason Fish: We used to use stored procedures, but we've moved on to using LINQ. It's a little bit faster to develop with.

Audience 10: Yes. It's gone a long way, too.

Jason Fish: Yes.

Audience 10: Especially with the way...

Jason Fish: But I'd say if you're still writing your queries in your application, I would first move you to stored procedures. That's a great first step.

Any other questions?

Audience 10: There's also the Zend Framework, which is a PHP framework?

Jason Fish: Good to know. If anybody didn't hear him, he said the Zend Framework for PHP?

 36:02

Audience 11: Zend.

Jason Fish: Yeah. Zend? OK.

Audience 10: It's the company that made the language.

Jason Fish: OK. Other questions, comments? In the back there?

Audience 12: We just put a supercache on our site, and what we discovered was that things were being cached in ways we didn't realize we were already caching. So for example, WordPress actually caches its RSS feed blog, and it does it on a 12-hour cache, which none of us knew. So we were putting all of our time into trying to get the supercache working and figuring out what the heck was going on, and it turned out to be something else. I think the moral of the story for us would be: know everywhere that caching is going on, because things can start to interact...

Jason Fish: All right. Any other questions? All right. Thank you very much.

[Applause]