Runtime Performance with CSS3 vs Images

I’m pretty happy with the great stuff CSS3 (and HTML5) brings. There are a lot of articles on the web encouraging use of the new CSS features such as gradients and shadows in order to cut down on the images in your page. But that’s only half the story: some care should be taken in balancing how many images you load against the load you put on the CSS engine.

Image Optimization

CSS3 allows you to add drop shadows to your elements, gradients as their backgrounds, and rounded corners on their… corners. Using these few capabilities (you might throw in a couple more, like custom fonts) you can put together much of the web’s design with only a few icons needed as images. This makes for a much smaller page download, because the definition of a shadow or gradient is only a few bytes while an image of the same thing is usually kilobytes. Pages download much more quickly.
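For illustration, a rule like the following replaces what used to require two or three sliced images (the .button class and the exact colors are placeholders, and depending on the browser you may still need vendor-prefixed forms of these properties):

.button {
	border-radius: 5px;
	box-shadow: 0 1px 3px rgba(0, 0, 0, 0.5);
	background: -webkit-gradient(linear, left top, left bottom, from(#ffffff), to(#dddddd)); /* Safari/Chrome */
	background: -moz-linear-gradient(top, #ffffff, #dddddd); /* Firefox */
}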

CSS Optimization

While you can do lots of great things with CSS3, drawing shadows and gradients dynamically can affect the responsiveness of your page. If you find your page isn’t scrolling smoothly, or dynamic pieces don’t pop like you’d want them to, you might want to optimize your CSS and use more images. Your page may download more slowly, but once it’s there it will be more responsive.

Case-study on Page Performance

I had a page with a lot of these gradients and shadows (the original version was 100% CSS, with no images at all), but scrolling the page left and right was very clunky and unresponsive. I thought perhaps I had too many HTML elements on the page, but I’ve seen pages with far more elements perform better. After playing around with the code a bit, it occurred to me that the dynamic calculation and drawing of the gradients and shadows was affecting performance. This should have been more apparent to me, since the same optimization is common in Flash when you use the drawing API too much. After removing the shadows and gradients from my stylesheet, the scrolling was smooth again, just like I would expect. Removing the shadows helped a lot more than removing the gradients. I theorized that the browser may have an easier time layering images than it does calculating shadows and gradients, so I tested it out.

After replacing all the gradients and shadows with images, I found my page still scrolled smoothly, even though the same shadows and gradients had caused it problems in CSS. For my particular case, I am creating a web application that users will come to and stay at for a while. There are a lot of elements on the page, a lot of design to it, and in this particular developer-art incarnation of it, a lot of shadows.

On a side note, the process of replacing the CSS shadows and gradients with images was much less painful with CSS3. I didn’t have to alter any HTML, because you can now layer multiple backgrounds onto an element, so my elements ended up with:

background: url(topimage.png) no-repeat left top,
	url(bottomimage.png) left bottom,
	url(middleimage.png) repeat-y left;

So even though I was forgoing CSS3 shadows, CSS3 still made my life easier and my page simpler with the images.

The Right Balance

For many web pages out there, adding a few shadows or gradients will make the page look that much nicer, and doing it in CSS3 is easier to tweak because you don’t have to re-export images from your site design file. But if you have performance problems in your page, you might try using images for some of the heavily repeated elements or the shadows on your page.

My First 10k, Trailer Park Style

I am finally getting around to sharing my Memorial Day adventure. I ran my first 10k here in Boulder (called the BolderBOULDER). It was pretty fun, considering I wasn’t in great shape for it. I outdid my expectations (I’m slow) with a 68 minute time. Our Jive team running the BolderBOULDER (voted best 10k in the country) planned out a sweet running uniform: jean cutoffs, wife-beater, Jive tattoo, fake mustache, and anything else that screamed “TRAILER PARK!”

UNFORTUNATELY, I was the ONLY one who really followed through on the whole outfit. Guess that means I’m totally the winner in the style category. Here are some photos to commemorate the event.

Starting out

Starting out

Just gettin’ going

Getting cooled off

Breaking a mile barrier

Home stretch

After Pose (mustache finally fell off)

Developers Put Their Heads in the Sand

As developers, we like to put our heads in the sand. We’d be much more successful if we didn’t. Let me explain. When I first learned about basic object-oriented programming, I was suddenly disgusted with functions and code that wasn’t an object. I got over it. When I learned about composition over inheritance, that became the standard by which I judged all code, mine included.


It became my fixation. I got over it. When I learned about design patterns, I wanted to apply them to every situation, and I wanted to do it right and apply them exactly the way prescribed in the pattern. I got over it. When I learned about optimizing code, I spat on for loops that didn’t initialize the length first, I kicked dirt around static methods which are shown to be slower (in the language I was using) than instance methods, and I generally despised any type of code which I read on a blog was slower than an optimized alternative. I got over it. I could go on with dependency injection, abstraction, database normalization, and on, and on, and on. I hope I continue to get over over-applying new knowledge.

I’ve always wanted to iterate over and rewrite again and again pieces of code, reusable libraries, and other gleaming nuggets I’ve done in the past which could be more perfect. Even faster. Even better. I’m getting over it.

In the last few months I’ve been working with some very bright and pragmatic developers. They’ve been teaching me, unbeknownst to themselves, to look at the big picture and to get my freaking head out of the sand. Just because instance methods are slower than statics doesn’t mean you shouldn’t use them. Creating an object is slower than calling a method, but should we throw OOP out the window and stick with functional programming? Messing up the Flyweight pattern (or even, heaven forbid, our blessed MVC pattern) by altering it from the original and all-knowing gang-of-four specification doesn’t end-of-life your product before it’s out the door.

What keeps your product from releasing is rewriting it, or pieces of it, over and over again. If it works, DON’T FIX IT. Only optimize if your application is too slow for your users. And then only optimize the slowest parts. Next application you write you can do it better, but freaking finish! Clean up and refactor as you add new features for your users. Don’t waste time redoing anything from scratch unless it’s complete junk. And if you wrote it, figure out why you’re writing complete junk in the first place and fix the root of the problem.

I’m writing a library in Javascript for HTML5 applications that I know will make many Javascript developers weep. It flies in the face of all the wisdom and standards they’ve read about on their favorite blogs. But it will make developing applications easier and more maintainable. I know the rules, and thus I know how and when to break them, because I know why they were established.

I’m still watching the development of Reflex, the Flash component library my brother is involved in (I’m not actively participating anymore). They’re fighting against head-sand-itis both externally and internally. Project supporters may disagree with things proposed because they aren’t “right,” or aren’t as fast as machine code. Internally they’re disgusted with themselves for breaking precious programming rules. Finally they’re starting to let ideals slide because progression is so slow. Eventually, I hope, they’ll raise their sights to the end goal and do what is necessary to reach it, even if it means using brains instead of rules. (No offense to any individual person or the general group; we all struggle with it, we’re developers.)

The rules are generalities, guidelines to help us until we understand the principle behind the rule well enough to make our own decisions. Understand your natural tendency to over-apply, and work against your nature to be more pragmatic.
May we all work on seeing the big picture, understanding the principle behind the rule, and creating great experiences for those who will use our applications. I’m certainly the pot calling the kettle black on this.

Creating a Hover Menu with HTML5 and Simpli5

A More Usable Application

I decided to build my own version of a contextual hover menu to make my applications more usable. It is meant to appear when you select a piece of data and give you quick access to all the actions you might perform on it. Forget long toolbars and hidden right-click menus. I wanted something that a user didn’t have to dig around to find, that wouldn’t be hard to navigate, and that wasn’t hidden (a right-click on the web is not common enough for users to rely on).

I’ll walk you through the beginning process I took to create the HoverMenu component using Simpli5 and then I’ll cover at a higher level the UX considerations that went into making it even better. You can check out the component in action first (using Safari, Chrome, or Firefox).

Base Simpli5 Component

To start, Simpli5 encourages using tags that provide good semantics for the application, ignoring whether or not they are valid HTML tags. It makes your application easier to read and understand, and since Simpli5 was made for creating full web applications, SEO and other content-based concerns are thrown out the window. I will start with the component definition in HoverMenu.js: the base component, the menu containers, and the menu items I’ll need. There will also be separators, but those have no code or logic and can be handled purely in the stylesheet.

var HoverMenu = new Component({
	extend: Component, // the base when using custom tags
	template: new Template('<hover-menu></hover-menu>'), // component template used when creating from code
	register: 'hover-menu', // css selector to convert matching elements into a HoverMenu

	constructor: function() {
		// the constructor
		this.submenu = this.find(simpli5.selector('menu'));
		if (this.submenu) this.submenu.hide();
	}
});

var HoverMenuSubmenu = new Component({
	extend: Component,
	template: new Template('<menu></menu>'),
	register: 'hover-menu menu'
});

var HoverMenuItem = new Component({
	extend: Component,
	template: new Template('<menu-item></menu-item>'),
	register: 'hover-menu menu-item',

	constructor: function() {

	}
});

I’ll give it a stylesheet HoverMenu.css to make it look good.
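I won’t walk through the whole stylesheet, but a rough sketch of HoverMenu.css looks something like this (the selectors follow the custom tags above; the colors, sizes, and absolute positioning of the submenu are just one way to do it):

hover-menu {
	position: relative;
	display: inline-block;
	padding: 4px 8px;
	cursor: pointer;
}

hover-menu menu {
	position: absolute;
	background: #fff;
	border: 1px solid #ccc;
	box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3);
}

hover-menu menu-item {
	display: block;
	padding: 4px 12px;
	white-space: nowrap;
}

hover-menu menu-item:hover {
	background: #e8f0fe;
}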

What I want is for the menu to pop up when the user hovers over the button.

constructor: function() {
	...
	this.on('rollover', this.open.boundTo(this));
	this.on('rollout', this.close.boundTo(this));
},

open: function() {
	var rect = this.rect();
	this.submenu.show(true);

	this.addClass('open');
	this.submenu.rect({left: rect.right, top: rect.top});
},

close: function() {
	this.submenu.close();
}

Of course, sometimes I might want the user to click to pop up the menu, for menus that are used less often or when there are many on the screen (you don’t want popups appearing all over the place just from moving your mouse around).

var HoverMenu = new Component({
	extend: Component,
	template: new Template('<hover-menu></hover-menu>'),
	register: 'hover-menu',
	properties: ['click-only'], // add attributes that translate to properties in this array

	constructor: function() {
		...
		this.on('click', this.open.boundTo(this)); // click will always open it
		this.clickOnly = false;
	},

	get clickOnly() {
		return this._clickOnly;
	},
	set clickOnly(value) {
		if (this._clickOnly == value) return;
		this._clickOnly = value;
		if (this.submenu) {
			value ? this.un('rollover', this.open.boundTo(this)) : this.on('rollover', this.open.boundTo(this));
		}
	},
});

Here I added an implicit getter/setter that defaults to false, so hovering will open the menu. But if you set hoverMenu.clickOnly = true, or use <hover-menu click-only="true">…</hover-menu>, then you’ll have to click the button to open the menu.

I’ve also added other settings for customization: autoClose to close the menu automatically when the mouse moves off of it for a moment, menuDelay to control the short delay submenus take to open and close (I talk about this later), and openBelow to cause the menu to open up beneath the button instead of to the side of it.
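As a quick sketch of how these settings can be set from code (the property names are the ones above; I’m grabbing an element that Simpli5 has already converted via its registered selector):

var menu = document.querySelector('hover-menu');
menu.clickOnly = true;   // require a click instead of a hover to open
menu.autoClose = true;   // close on its own shortly after the mouse leaves
menu.menuDelay = false;  // turn off the short submenu open/close delay
menu.openBelow = true;   // open beneath the button instead of beside it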

Next we need to allow menu items to hold submenus and to dispatch events when the user selects them. It would be nice if these could be triggered from code too.

// HoverMenuItem
events: ['select'], // add custom events that can be listened to via onevent attributes

constructor: function() {
	this.on('click', this.select.boundTo(this));
	this.on('rollover', this.hovered.boundTo(this));
	this.submenu = this.find('menu');
	if (this.submenu) {
		this.submenu.hide();
		this.addClass('submenu');
	}
},

open: HoverMenu.prototype.open, // use the same function from HoverMenu

close: function() {
	if (this.submenu) {
		this.submenu.close();
		this.removeClass('open');
		this.parentNode.hoveredItem = null;
	}
},

select: function() {
	if (this.disabled || this.submenu) return;
	this.dispatchEvent(new CustomEvent('select', true)); // this is our own event and we will bubble it
	this.menu.close(); // once selected, close the whole menu
},

hovered: function() {
	if (this.disabled) return;
	if (this.parentNode.hoveredItem && this.parentNode.hoveredItem != this) {
		this.parentNode.hoveredItem.close();
	}

	if (this.submenu) {
		this.parentNode.hoveredItem = this; // track the open item so a sibling can close it later
		this.open();
	}
}

Hopefully you can follow the logic by reading through the code, but there are a couple of things I want to point out. The events property holds a list of custom events for the component to look for when initializing its attributes. Because I specified the select event there, you can add onselect="alert('item selected')" to the tag and it will work. Also, here is our first usability tidbit for the menu: don’t close a submenu until the user moves their mouse to a sibling menu item.

That wraps up our basic-component-building-101-in-Simpli5 overview and brings us to the user experience of using this component. Now, I realize that UX encompasses much more than a single component, but the usability and experience the user has with this component is what I’m referring to when I say UX.
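Before digging into the usability work, here’s what listening for that select event from code might look like (a sketch; it assumes the item element exposes the same on() method the components use internally, and that the event bubbles as noted above):

var item = document.querySelector('hover-menu menu-item');
item.on('select', function(event) {
	// the event bubbles, so you could also listen on the hover-menu element itself
	console.log('item selected');
});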

Making it Shine with Usability

Most of these things were added because I tried using the menu and noticed spots of frustration. Others came from other people’s suggestions.

The first thing I did to enhance the usability of the menu was to keep the entire menu and its submenus from closing immediately. The less accurate a user has to be with their mouse, the faster they can get things done and the easier the application is to use. If the menu closes because the user accidentally moves the mouse a little beyond it, they have to start over and reopen the menu from the beginning. When the hover menu’s autoClose is true, the menu waits 600 milliseconds before closing. This lets a user make mouse mistakes and recover from them before having to reopen the menu.

The next usability piece came from testing with longer submenus. I noticed that if I wanted to click the last item in a submenu and moved my mouse straight to it, the mouse path crossed the edge of the next sibling menu item, closing the previous item’s submenu. In order to select that last submenu item I had to alter my mouse path into a 7 shape, moving across to the submenu first, then down to the desired item. To allow some forgiveness in the mouse movement without hampering the speed of opening the next submenu when that is what the user actually wants, I added a 150ms delay to opening and closing submenus. This seemed to be enough time for a quick mouse movement down across sibling menu items into the submenu, while not being too much time if you really did want to open the sibling’s submenu. I also added the menuDelay option, which defaults to true but can be set to false if you want to get rid of this 150ms delay.

I added an alternate element style in the stylesheet, <menu-content>, as an alternative to holding menu items in a submenu; it allows richer components like color pickers or lists of images to be used, adding to the overall UX of the UI.

I added positioning support for the menus to pop up above or to the left of their parent if they are near the edge of the screen. I made the menus append to the body of the document so that they wouldn’t get cut off by any overflow auto/hidden elements. There were other small things I added too, such as a style on items with open submenus so they can be seen more easily, and allowing the menu to be closed by clicking elsewhere or pressing Esc.

Overall the component turned out quite well. Here is a demo page of it in action (view source to see the markup): http://jacwright.github.com/simpli5/demos/hover-menu.html

The Joys of HTML5

HTML5 is SOOOooooo much nicer to program for than previous versions of HTML. Here’s why, but first a little context. We’re creating a power-user interface for the next version of our app using HTML5. It will be similar to TweetDeck with multiple columns, and it needs all sorts of functionality crammed into the column views. This will require a lot of custom UI components and iterative work on UX. But it’s not so daunting a task when you don’t have to support really old browsers, the kind that force you to compromise your users’ experience.

I’ve been putting time into creating a Javascript library for HTML5 applications, and I’ve open sourced it under the name Simpli5 (hosted on github). Many of the things I’m doing there will make traditional Javascript purists cry in horror, but it’s focused on building rich applications that are easy to understand and maintain. I’ll come back to Simpli5 later. Today it’s about HTML5 and the CSS and Javascript that come with it. These are some of the golden gleaming granules of goodness that give me goosebumps with HTML5. (Now THAT is some alliteration!)

CSS selectors do what they’re supposed to. Using the child selector “>” I can remove blocks of CSS that exist solely to nullify cascading styles. I can add a margin to all but the first element using the sibling selector “+”. I can exclude elements using :not(.someclass) and skip items using :nth-child(odd). :hover works on all elements. And I can use a[href*=jive] if I want to highlight Jive links all special. (A few concrete examples are at the end of this post.)

CSS styles prevent much of the need for extra HTML cluttering up the page for styling’s sake. I can layer multiple backgrounds onto elements (background: url(1), url(2), etc.), round out corners (border-radius: 5px), reliably use opacity for a whole element or for just the border/background color (rgba(0, 0, 0, .5)), and even create gradients and reflections. Everything I need for a Web 2.0 application. Between this and the selectors, I can cook up some pretty decent looking prototypes without any images at all.

Javascript consistency lets me reliably make use of implicit getters and setters, add to the prototype of DOM elements (gasp, he wouldn’t dare!), select elements in the DOM using all the above-mentioned CSS selector coolness (natively, BTW), and use all the Array methods and DOM methods that you SHOULD be able to use but usually can’t because you have to support browser X (being, of course, IE).


HTML5 also has newer tags, microformats, and such, but that hasn’t been something I’ve really benefited from so much. I’m not building a website; I’m building an application, and those have different needs. I’m excited to use some of the other new features such as client-side storage and databases for speed and offline support.
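Here are the kinds of rules I mean (the class names are hypothetical, just to show the selectors and styles):

.column > header { margin: 0; }                     /* child selector: only direct children */
.column li + li { margin-top: 8px; }                /* sibling selector: every item but the first */
.column li:not(.unread) { opacity: 0.8; }           /* exclude by class */
.column li:nth-child(odd) { background: #f7f7f7; }  /* stripe alternating rows */
a[href*=jive] { font-weight: bold; }                /* highlight Jive links */

.panel {
	background: url(top.png) no-repeat left top, url(bottom.png) no-repeat left bottom;
	border-radius: 5px;
	border: 1px solid rgba(0, 0, 0, 0.5);
}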

Scalable MySQL

Today I attended a class on Building Scalable, High Performance Applications put on by Percona, a bunch of guys who wrote MySQL and started their own consulting firm. There were only two people in the class, which was quite surprising since these guys are the best in their space. Here are my notes from the class, for what they’re worth. Some of the items are random tidbits that came up, and since the instructor knew PHP, some of the material is PHP-flavored.

Performance

Performance comes down to response (how long a request takes) and throughput (how many users you can serve).

If a feature isn’t core to the user’s experience, it ought to be in another database. Logs, statistics, or other non-customer facing data shouldn’t become a bottleneck to the application’s functions.

It’s good for all the stakeholders to agree on what a reasonable response time is.

Optimizing for throughput can hurt response, optimizing for response can hurt throughput.

Passing non-urgent requests to an asynchronous queue can help throughput and response time. Gearman or ActiveMQ can be used for async tasks that need to be inserted.

Tuning your slowest queries isn’t as helpful as looking at the full stack of what happens in a regular user request and finding its bottleneck. Saving on one query could make the overall request take longer because of additional tasks that need to be done. Alternatively, eliminating a bunch of the fast queries might help application performance, even though they’re fast. Setting your long-query time to 0 for 5 minutes, an hour, whatever you can afford, and then sending the log through mk-query-digest will give you the queries that took the longest combined time. That will help find fast queries that take a lot of time because of how often they are called. You can also capture traffic with tcpdump on port 3306 and pipe it to mk-query-digest.
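In practice that looks something like this (the log path is just an example; mk-query-digest is run from the shell against the slow log afterward):

-- make sure the slow query log is on, then log every query for a while
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 0;

-- ...wait 5 minutes, an hour, whatever you can afford, then put it back...
SET GLOBAL long_query_time = 1;

-- then, from the shell, aggregate the log by total combined time:
--   mk-query-digest /var/lib/mysql/mysql-slow.log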

You can add performance data gathering in your live app by grabbing the data randomly (e.g. if (rand(1, 100) == 1) capturePerformance();)

Sphinx is a full-text search engine; it’s like Lucene for PHP.

Looking at average time doesn’t help as much as looking at the 95th percentile, because some pages might be really fast, others might be really slow. (e.g. 95% served within 300ms, 99% within 1200ms)

Make it harder for users to do things that are expensive. Don’t discount the outlier expensive requests (they can add up to a denial of service); users may surprise you. It’s easier to scale tasks when they are all about the same shape.

Cacti and Munin can graph activity over time with plugins for MySQL.

Cross database joins are really no different than cross table joins in a single MySQL instance.

Make your code flexible and put your SQL into a library. Don’t hard-code database names, IPs, or hosts; reads may need to go against one database and writes against another, and perhaps logs or other functions hit yet another database.

Sharding is for write-heavy applications. A cluster setup is best when grabbing rows by id (and is also good for write-heavy loads), but getting a range of data causes a lot of network IO.

Exact numbers aren’t always needed. You can guesstimate or round the number, grab the number once a day and cache it, use a counter with memcached to batch the updates, etc.

Don’t over-engineer; don’t add complexity you don’t need. One database on one machine is simple, master-slave is less simple, and sharding even less so.

First, use caching (memcached) then, if needed use replication. Finally, as a last option, go to sharding.

Grouping writes in a transaction helps a lot. Using a queue to async the writes will help.

InnoDB groups rows by their id in page files. Thus, auto-increment ids are more optimized under InnoDB than random ids.

Percona’s blog is mysqlperformanceblog.org and looks to have some great articles including EC2 performance and more.

If you’re sharding, there are several methods. You could even store the same data in different shards, each accessed for a different use (a search db vs. a data storage db).

Sysbench provides a framework for benchmarking mysql, but replaying the user’s data is the best strategy. Mysqlslap creates random data for testing data load.

When upgrading, upgrade the slaves first, then the master.

In applications, mysql connections should be short lived. You should fetch all the data as early in your request as you can, then close the connection. Stored Procedures can be more performant because they lower the round trips to the database from your application code.

Persistent connections or connection pools can often be bad. JDBC is pretty good, but others (PHP) are not. Creating a connection to MySQL is really cheap.

Dividing reads amongst the slaves in a smart manner allows for DB caching.

Building summary tables for users when they log in to cache what they may be likely to access can increase performance.

Master-master setup (mysql-master-master, MMM) can work well when a database goes down.

When we know MySQL is the issue, use EXPLAIN. Bookmark, read, and use http://dev.mysql.com/doc/refman/5.1/en/using-explain.html.

VARCHAR is stored on disk as the actual length plus a small length prefix, but when MySQL aggregates it in memory during a SELECT it cannot use variable-width fields, so using VARCHAR(255) for all your fields causes much higher memory usage in queries than might be necessary.

\G at the end of your queries on the command line prints out the rows in blocks and is more readable (if you have one or just a few rows returned).

MySQL is pretty smart about understanding its data distribution. Run ANALYZE TABLE table_name; to rebuild the statistics it bases those decisions on.

Indexes are stored in a balanced tree.

An index on field A, B, and C takes the three values and concatenates them. It will use A first, then B, then C. If any of the WHERE clause is a range query (<, >, BETWEEN, etc) MySQL will not use the fields after that. In other words, if A was only used in range queries, it wouldn’t make any sense to add B and C to the index. Or you might put B and C before A. The first field in your index should be the most limiting in the result set. Knowing your data will help you to create and use the best indexes.
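A quick sketch of that (the table and columns are hypothetical):

-- index on (a, b, c): columns are used left to right
CREATE INDEX idx_abc ON orders (a, b, c);

-- all three columns can be used from the index:
SELECT * FROM orders WHERE a = 1 AND b = 2 AND c = 3;

-- the range on b stops index use there, so c is not used from the index:
SELECT * FROM orders WHERE a = 1 AND b > 2 AND c = 3;

-- if a is only ever used in ranges, (b, c, a) is probably a better column order:
SELECT * FROM orders WHERE a > 1 AND b = 2 AND c = 3;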

SQL Tuning by Dan Tow is a good book to understand how things work under the hood.

When JOINing tables, if you think MySQL isn’t doing it optimally you can use STRAIGHT_JOIN and it will filter the result set in the order you listed the tables, rather than following its own idea.

Subqueries are horrible for performance with MySQL’s optimizer. Use a join where you can.

SELECT field FROM table PROCEDURE ANALYSE(); will help you see what your data looks like, with min, max, and average sizes.

A benchmark function helps test speeds: SELECT BENCHMARK(1000000, CRC32('test'));

Since MySQL uses a nested loop join, don’t be shy about denormalizing your data. Optimizer decision making is all about tradeoffs.

`mysqladmin extended status -i 2 -r` will show the global status every 2 seconds. -r will do relative.

SHOW SESSION STATUS will give the temp tables created for the session and a lot of other useful data that I don’t understand completely. Setting @@profiling = 1 will start storing profiles for the queries you run. Then SHOW PROFILES; will show the profiles stored for each query, but you still have to figure out what to do with them. And that is for the whole system, not thread/connection specific.

Using IN() is faster than ranges (e.g. WHERE id IN(1, 2, 3, 4, 5, 6, 7, 8, 9, 10); is faster than WHERE id BETWEEN 1 AND 10;, supposedly), because the equality comparison is better optimized than the range comparison.

Day one advice: keep it simple; normalize, with the understanding that you may denormalize later as needed. Use Unicode where needed, not necessarily everywhere. Always have a primary key, keep it short, and try to make it your primary access method. InnoDB is safest; use it unless you have a specific reason to use MyISAM.

If you have a column that gets written to constantly (like last_login_date on a user table), separate it out into another table. Keep your reads separate from your writes. “Hot column on a wide table.”
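A sketch of what that separation might look like (the table and column names are made up):

-- rather than updating a hot column on the wide, heavily-read user table,
-- keep it in its own narrow table keyed by the same id
CREATE TABLE user_login (
	user_id INT UNSIGNED NOT NULL PRIMARY KEY,
	last_login_date DATETIME NOT NULL
) ENGINE=InnoDB;

INSERT INTO user_login (user_id, last_login_date)
	VALUES (42, NOW())
	ON DUPLICATE KEY UPDATE last_login_date = NOW();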

Domas has a good article on why round-trips take time.

Mark Calahan wrote a nice post about using LIMIT with MySQL. Using LIMIT 100, 10 reads 110 rows but only returns 10. Grabbing the last id from the previous page and doing WHERE id > last_id LIMIT 0, 10 is faster.

To optimize, don’t use triggers, foreign keys, or stored procedures, except where stored procedures save round trips to the server.

Hope this helps.