It's official: WSJ and Guardian UK anti-science shills

Posted by tigerhawkvok on February 22, 2010 18:23 in General , anti-science , public science , internet

This is just a friendly public service announcement: ignore any science (which is usually actually "science") from the Wall Street Journal and the Guardian. Both are fairly uniform in their opposition to climate change (the broader issue, not even just its anthropogenic nature), with occasional alt-med quackery thrown in. I'd nail them on evolution coverage too, but for now I'll give them the benefit of the doubt and assume they're just subject to the media's usual poor coverage of the subject.

If you'd like some pretty good, easily-accessible sources on climate change, check out:

  • "Tamino" is a researcher who works with the climate data, and frequently posts statistical breakdowns and debunkings of common claims. While the debunkings aren't instantly findable, they're quite thorough when you find them.
  • Skeptical Science has a list of frequently used arguments with knockdowns, citing peer-reviewed papers.
  • RealClimate is a site run by various climatologists.
  • "How to Talk to a Climate 'Skeptic'": A large number of articles sorted by class of argument and then by subargument.

So, yeah. WSJ? "Scientist says X" is meaningless unless it is peer-reviewed, even more meaningless when the scientist isn't a climatologist, and a "maverick" flying in the face of the consensus is not actually privy to any special data. Also, stop using "scientists" like it's a magical catch-all phrase, or I'm going to have to start calling non-scientists "humanitists". A physical anthropologist or chemist has no special, extra-noteworthy climate position.

Ask yourself: what reasonable evidence do you need to demonstrate that climate change is happening? Will you honestly admit that you would be willing to change your position when confronted with that evidence? I have done some models of generic planetary temperatures, so I know many of the influences; further, perhaps six years ago, I was uncertain as to the anthropogenic nature of the argument. Soon after, I saw a long-term solar data analysis which removed the only reasonable alternative candidate from the equation. Further evidence keeps building to support anthropogenic climate change. You also have to look globally, and not just at the United States (which many of us in the US are prone to do). I don't think there is any evidence short of the catastrophic that will convince those of the Guardian or WSJ.
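
For what it's worth, the flavor of a "generic planetary temperature" model is easy to sketch. Below is a minimal zero-dimensional energy balance in Python; it is not the models mentioned above, just the standard textbook balance of absorbed sunlight against blackbody emission, and it deliberately omits the greenhouse term whose anthropogenic growth is the whole argument:

```python
# Zero-dimensional energy balance: absorbed solar flux S(1 - a)/4
# equals emitted blackbody flux sigma * T^4. Constants are standard;
# the model itself is a deliberately crude sketch.

STEFAN_BOLTZMANN = 5.670374419e-8  # W m^-2 K^-4

def equilibrium_temp(solar_constant, albedo):
    """Equilibrium temperature (K) of an airless gray planet."""
    absorbed = solar_constant * (1.0 - albedo) / 4.0
    return (absorbed / STEFAN_BOLTZMANN) ** 0.25

# Earth: S ~ 1361 W/m^2, albedo ~ 0.30
print(round(equilibrium_temp(1361, 0.30)))  # 255 (K)
```

The ~33 K gap between that number and Earth's observed mean surface temperature is the greenhouse effect, which is exactly the knob the anthropogenic argument is about.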

For various other topics, just ask and I'll put some links up when I get a request. Seems like the end of this post got slightly off-topic, huh?

Embed HTML5 Video Seamlessly

Posted by tigerhawkvok on January 18, 2010 23:02 in programming , internet

Designing for some sites this weekend, I finished up a PHP wrapper that enables HTML5 <video> playback seamlessly on sites. The whole process is broken into very few steps:

One-Time Steps

  • Download LongTailVideo's JW Player, extract it, and put player.swf into a directory named "modular".
  • Into the same directory, save this fallback, explanatory image.
  • Create a folder named "videos".
  • Place the following code in an included PHP file, or somewhere else in your PHP document:
function embedVideo($file, $width=NULL, $height=NULL, $title=NULL, $poster=NULL, $force_mime=NULL) {
  // Code based on http://camendesign.com/code/video_for_everybody
  // Encode the video used by this as Ogg and h.264 / mp4.
  // Make sure the $file provided is the FULL URL to the files, with no extension.
  if ($width == NULL) $width = 640;
  if ($height == NULL) $height = 360;
  if ($poster != NULL) {
    $flashposter = "&image=$poster";
    $poster = "poster='" . $poster . "'";
  } else {
    $flashposter = "";
    $poster = "";
  }
  $objheight = $height + 15; // room for the QuickTime controller
  $swfheight = $height + 20; // room for the Flash controls
  echo "<div class='video'>
       <video width='$width' height='$height' $poster controls='controls'>";
  if ($force_mime) {
    // Mimetype fix for certain server configurations: reroute the Ogg
    // request through index.php (see below) via its ?name= parameter
    $fileold = $file;
    $location = explode("/", $file);
    $len = count($location);
    $location[$len-1] = "?name=" . $location[$len-1];
    $file = implode("/", $location);
  }
  echo "
	<source src='$file.ogv' type='video/ogg' />";
  if ($force_mime) $file = $fileold;
  echo "
	<source src='$file.mp4' type='video/mp4' /><!--[if gt IE 6]>
	<object width='$width' height='$objheight' classid='clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B'><!
	[endif]--><!--[if !IE]><!-->
	<object width='$width' height='$objheight' type='video/quicktime' data='$file.mp4'>
	<param name='src' value='$file.mp4' />
	<param name='showlogo' value='false' />
	<object width='$width' height='$swfheight' type='application/x-shockwave-flash'
		data='modular/player.swf?file=$file.mp4" . $flashposter . "'>
		<param name='movie' value='modular/player.swf?file=$file.mp4" . $flashposter . "' />
		<img src='modular/no_vid.png' width='$width' height='$height' alt='$title'
		     title='No video playback capabilities, please download the video below' />
	</object><!--[if gt IE 6]><!--></object><!--<![endif]-->
	</video>
	<p>Download Video: <a href='$file.mp4'>MP4 (h.264)</a> | <a href='$file.ogv'>OGG (theora)</a></p>
       </div>";
}

Now, go into the "videos" folder. Create a file named ".htaccess" and add the following:

AddType video/ogg .ogv
AddType application/ogg .ogg

That will work on many server configurations, but not all. In case your server is stubborn (i.e., it will not honor the .htaccess file), also create a file "index.php" and insert the following:

$video = basename($_GET['name']);
if (file_exists($video)) {
  $fp = fopen($video, 'rb');
  header('Access-Control-Allow-Origin: *');
  header('Content-Type: video/ogg');
  header('Content-Length: ' . filesize($video));
  fpassthru($fp); // stream the file out with the corrected mimetype
  fclose($fp);
} else {
  header('HTTP/1.0 404 Not Found');
  echo "404 - video not found";
}

OK, your server-side stuff is done. You only need to do all that stuff once.

Each Time

Now, you can make your video. Make your video however you want, but when you're done, head over to firefogg.org in Firefox 3.5+ and convert your video. Essentially any settings available there will work (as the point of the extension is to leave it playable in a browser). You can use other programs, but this one is more-or-less foolproof. Then, use Handbrake to convert your video into H.264. Make sure your video is saved as *.mp4, and has the same name as the *.ogv file. You should now have two versions of your video. Dump both of them in your videos folder.
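
Firefogg and Handbrake are the foolproof route; if you are comfortable on the command line, ffmpeg can produce both files with matching basenames in one go. A sketch (the paths are placeholders, and your ffmpeg build must include the libtheora, libvorbis, and libx264 encoders):

```python
# Build the two ffmpeg invocations that embedVideo() expects:
# basename.ogv (Theora/Vorbis) and basename.mp4 (h.264/AAC).
import subprocess

def encode_commands(source, basename):
    """Return the argument lists for the Ogg and MP4 encodes."""
    ogv = ["ffmpeg", "-i", source,
           "-c:v", "libtheora", "-c:a", "libvorbis", basename + ".ogv"]
    mp4 = ["ffmpeg", "-i", source,
           "-c:v", "libx264", "-c:a", "aac", basename + ".mp4"]
    return ogv, mp4

def encode(source, basename):
    """Run both encodes (requires ffmpeg on the PATH)."""
    for cmd in encode_commands(source, basename):
        subprocess.run(cmd, check=True)

print(" ".join(encode_commands("raw.mov", "videos/clip")[0]))
# ffmpeg -i raw.mov -c:v libtheora -c:a libvorbis videos/clip.ogv
```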

Now, embedding your video? It's easy. Just stick in a call along these lines (adjust the URL and values to your own video) when you want to spit out your video:

<?php embedVideo('http://www.example.com/videos/myvideo', 640, 360, 'My Video'); ?>

The first argument is a string with a path to your encoded videos. Don't include the file extension; the code takes care of that. All subsequent values are optional. The first one is video width, the second video height, the third the video title, and the fourth a thumbnail for the video (I use the method suggested by the Theora Cookbook), if you made one. The final value only does anything if it's TRUE. Stick in TRUE if the video isn't playing in Firefox, and it'll feed the video through the index.php in your videos directory and work automagically.

The code is XHTML5 compliant, and has no rendering errors when submitted to the browser with the XML mimetype. I've tested it in IE6, IE8, Chrome 4, Firefox 3.6, Opera 10.5, and Safari 3 for Windows.

Enjoy! I need to update LifeType to get it working on the blog, but you can see the code in action on one of Velociraptor Systems's sample pages. This code is free to use, and released under the Creative Commons/GNU Lesser General Public License 2.1. Let me know if there are any problems implementing it!

You, Data, and the Internet

Posted by tigerhawkvok on December 10, 2009 12:25 in General , computers , internet

This says it all:

A Day in the Internet

Created by Online Education

SVGs and Browsers

Posted by tigerhawkvok on November 05, 2009 20:15 in computers , programming , websites , internet

First, I realize I missed this week's Tuesday Tetrapod. I'll put up a double-feature next week — but I've been trying to meet a personal deadline and didn't quite have time to give the TT the attention it deserved. So, let me touch a little bit on SVGs and, thus, in a roundabout way, what's been occupying my time.

First, "SVG" stands for "Scalable Vector Graphic". As the image at the right demonstrates, zooming in on raster graphics such as bitmaps, PNGs, or JPGs introduces artifacts. Further, if you want to reuse the image, you cannot scale it beyond a certain resolution. Vector graphics, on the other hand, are very much like text in that their descriptions are essentially plaintext describing how lines arc. The lines arc the same way no matter how zoomed in you are, and are rendered on the fly. This means they are infinitely resizeable, and retain fidelity limited only by the physical pixel size of your monitor.
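
Since an SVG is plaintext, "zooming" is nothing more than changing the viewport; the geometry itself never changes. A toy illustration in Python (the circle and sizes are invented for the example):

```python
# The same path data rendered at any display size: only the width and
# height attributes change, never the geometry inside the viewBox.
SVG = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100"
     width="{w}" height="{h}">
  <circle cx="50" cy="50" r="40" fill="none" stroke="black"/>
</svg>"""

def render_at(width, height):
    """Emit identical geometry at an arbitrary display size."""
    return SVG.format(w=width, h=height)

# A 100x "zoom" changes two attributes; the shape description is untouched.
print(render_at(10000, 10000))
```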

Now, the problem is, of course, Internet Explorer. It's not the only problem, but the main one. See, IE doesn't support SVGs at all. Just can't do a thing with them. While Firefox has finally passed IE6 in market share, IE as a whole still holds ~65% of the global market (though on technical sites, its share is closer to 20%). So this has been a show-stopper for SVG adoption.

However, in October Google released the SVGWeb library, which is a simple Javascript library that renders SVGs on IE in Flash, and displays them as a Flash document. Suddenly, in one blow, SVG has a nearly universal level of application. If NetApplications is right, a combined implementation has a 98.88% market share penetration. Now, this isn't quite right, as Flash still doesn't support 64-bit browsers, so users of Internet Explorer 64-bit are out in the cold — but most users of 64-bit operating systems, even those that use IE, use the 32-bit version, so that's pretty negligible.

Now, with that hurdle done with, we have a few other hoops to jump through. First, of Gecko, Webkit, and Presto-based browsers (read: Firefox, Safari + Chrome, and Opera), Gecko and Webkit each incompletely support external SVG resources with the two tags that should work, <object> and <img> (Presto properly supports both). Gecko does not support <img> at all, and instead displays the alt-text; Webkit supports <object> but manipulating sizes and such does not work.

Now, I wanted to use an SVG-based logo on my site, and I was fed up with this implementation problem, so I wrote up a simple PHP library to get SVG working conveniently on multiple browsers. Now, to use an SVG, all you have to do is make a call along these lines:

	    <?php dispSVG('path/to/image.svg', 'id' [, width, height]); ?>

where everything in the square brackets is optional (do note, however, that if no height or width is specified, IE will default to 100px by 100px). So, for example, on this implementation test page, the logo with the yellow background is just in:

	    <div style='background:yellow;width:500px;height:600px;'>
	      <?php dispSVG('assets/logo.svg','logo',450); ?>

That's it. Nice, simple, clean and cross-browser. All you have to do is extract the library to your server in a location, edit the $relative variable in svg.inc if it's not in the root directory, and paste the following into the head of your (X)HTML document:

require('PATH/TO/svg.inc'); // edit this path
if(browser_detection('browser')=='ie') echo ""; 

Once you've done that, you're done! Just call dispSVG whenever you want to display an SVG. By giving the ID or class calls a value, you can address your SVG as normal in your CSS. Nice and easy! It's not quite perfect, as CSS background only works with Presto (of course. Go Opera!), but it works for most uses.

Now, all that is because I'm working on creating a site to advertise my website creation and deliver built-to-order computers for customers. The computer building system is almost done, requiring some modification on the last four system types to give configuration options. Then I just need to generate some content for the other pages, and it'll go live!

So, I ask (what readers I have) a favor: Can you please check out the site, and give me any stylistic, functional, or content critiques, recommendations, or comments you may have? With luck, soon it should have the proper domain http://www.velociraptorsystems.com/!

Telcos: Yes, they really are that wretched

Posted by tigerhawkvok on October 28, 2009 01:12 in computers , politics , internet

Oh man, this article from ArsTechnica just says so much. The skinny:

Monticello, Minnesota was getting bad internet service. The voters passed a referendum to build a municipal, fiber-to-the-home service. TDS, a local telco, sued. And sued. And sued some more. And after stonewalling the city for long enough, they unveiled today a 50 Mbps symmetrical fiber-to-the-home service for all residents, at $49.95 per month.

Sample two:

Such stories aren't limited to Minnesota suburbs, either. Just last month Telephony Online ran a piece on how Cox cable prices had "dropped considerably" since Lafayette, Louisiana lit up a fiber system of its own.

"Cox froze the cable rates in Lafayette, and they didn’t freeze the rates in other areas," said Terry Huval, director of the muni project. "We figured our citizens saved over $3 million in cable rates even before we could offer them service."

via Ars

Big surprise, we don't have enough competition and when we suddenly get it, hey presto, prices drop and service improves — even if the government (albeit local) has to inject competition into the marketplace.

So, first, this really drives home the whole "telcos are incompetent and kinda vaguely evil" point; second, we do not have enough broadband competition: the market is dominated by two or three carriers covering 75-80% of customers.

Third: does this have an analogue I'm missing? Hm... oh wait. A government-sponsored public option is supposed to do the same bloody thing. You know, force competitive rates among insurers. Lower prices for Americans. Etc. But naaaah, we've never shown that'd work. Oh wait.

Yes, there's more to the health care bill than that, with conservative estimates sponsored by America's Health Insurance Plans (read: the insurance lobby) even showing a 47% decrease in premiums WRT today's levels with a no-PO bill. But the public option is what gives the bloody thing teeth!

More Net Neutrality (or, how Telcos buy Votes)

Posted by tigerhawkvok on October 27, 2009 13:14 in computers , politics , internet

Oh, net neutrality. As you may know, the FCC proposed rules for net neutrality last week and opened the stage up to commenting. Here are the proposed rules:

  • Consumers are entitled to access the lawful Internet content of their choice
  • Consumers are entitled to run applications and use services of their choice, subject to the needs of law enforcement
  • Consumers are entitled to connect their choice of legal devices that do not harm the network
  • Consumers are entitled to competition among network providers, application and service providers, and content providers.
  • A provider of broadband Internet access service must treat lawful content, applications, and services in a nondiscriminatory manner
  • A provider of broadband Internet access service must disclose such information concerning network management and other practices as is reasonably required for users and content, application, and service providers to enjoy the protections specified in this rulemaking

Now, I do have some issues with this, including the framework for "reasonable" management:

But overall, I feel like this is a good, large step forward. In my opinion, net neutrality rules should be quite simple:

  • ISPs may not discriminate, manage, inspect, or otherwise treat any set of data over their network in any way that is not applied uniformly to all data over the network in an identical manner. Random fluctuations in network reliability are permissible, subject to statistical testing to confirm they are randomly distributed and protocol-agnostic.
  • Access to content may not be restricted or controlled in any manner, unless such content comes from an IP address with zero legal content. At that time, restriction to access of that IP address is at the discretion of the ISP, but not mandatory.
  • None of these principles affect the legality of actions taken on the internet.
  • Violations of these principles are subject to a $100,000 fine (per instance).
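
The statistical check in the first rule can be sketched concretely. Below is a permutation test in Python on hypothetical per-protocol throughput samples; a small p-value means the difference between protocols is very unlikely to be a random fluctuation, i.e. the ISP is shaping traffic. The data and the 0.05 threshold are illustrative assumptions:

```python
import random

def permutation_p(a, b, trials=10000, seed=0):
    """p-value for the null hypothesis that throughput samples a and b
    come from the same distribution (statistic: difference of means)."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / trials

http_mbps = [4.9, 5.2, 5.0, 5.1, 4.8, 5.3]
bittorrent_mbps = [1.1, 0.9, 1.3, 1.0, 1.2, 0.8]  # suspiciously slow
print(permutation_p(http_mbps, bittorrent_mbps) < 0.05)  # True: shaped
```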

In other words, I'd love to see network neutrality legislation that favors a "dumb pipe". But OK, that's fine, like I said — this is a good step. Who couldn't favor it?

It turns out that John McCain (and, though it's not explicitly mentioned here, a number of other Republican representatives taking large donations from AT&T, Verizon, and/or Comcast) has a bone to pick:

The money trail is actually painfully obvious. But it doesn't change the fact that the openinternet.gov site is being deluged with anti-net-neutrality comments from those getting their pockets filled by telcos. The FCC is seeking out comments, and it's important that you comment: either the quick way on openinternet.gov's discussion page, or make a more "official" comment by submitting a filing on ECFS (Proceeding/Docket Number "09-191").

Help keep the net open! If you ever have doubts, just consider for a moment how hard it is to change your internet carrier ... and if they decided to, say, cut off Google from you, what your recourse would be. That is what Net Neutrality is about.

Google Wave and (no) Internet Explorer

Posted by tigerhawkvok on September 30, 2009 16:21 in news , websites , internet

To follow up on my anti-IE6 post, looks like Google's Wave project won't support IE. To quote Ars:

The developers behind the Wave project struggled to make Wave work properly in Microsoft's browsers, but eventually determined that the effort was futile. Internet Explorer's mediocre JavaScript engine and lack of support for emerging standards simply made the browser impossible to accommodate. In order to use Wave, Internet Explorer users will need to install Chrome Frame.

This isn't even just IE6 — this is all versions of Internet Explorer. Hilarious!

Website Development 2: Killing IE 6

Posted by tigerhawkvok on September 28, 2009 15:41 in websites , internet

Believe it or not, I'm finally following up a little on this 1.5-month-old post on web development.

So, in my standard course of RSS feeds, I ran across Ars Technica's monthly web usage share post. I noticed a nice by-percentage breakdown of IE market share, and realized that Internet Explorer 6 has about 16% market share.

This is a browser that is eight years old, doesn't render transparent PNGs, doesn't render CSS correctly, has missing CSS selectors, and has 139 security vulnerabilities. Oh, for good measure, a number of Javascript commands outright crash it.

There are entire sites devoted to the push to eliminate IE 6:

Even companies, like 37signals, have phased out IE6 support.

Here's a quick thing you can insert into your own pages (right before </body>) for a subtle but effective notice to upgrade, taken from ie6update.com:

<!--[if IE 6]>
<script type="text/javascript">
	/*Load jQuery if not already loaded*/ if(typeof jQuery == 'undefined'){ document.write("<script type=\"text/javascript\" src=\"http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js\"></"+"script>"); var __noconflict = true; }
	var IE6UPDATE_OPTIONS = {
		icons_path: "http://static.ie6update.com/hosted/ie6update/images/"
	}
</script>
<script type="text/javascript" src="http://static.ie6update.com/hosted/ie6update/ie6update.js"></script>
<![endif]-->

It's kind of insane that this browser is still around. Even if you run a small site — enough small sites start popping this up, and you'll get people's attention.

I also have a PHP setup that's more configurable, and if someone wants it, I'll put the code up.

Phylogenies Everywhere

Posted by tigerhawkvok on September 14, 2009 14:36 in computers , evolution , biology , programming , websites , internet

A surprising amount of this weekend was dedicated to working on phylogenies. I updated non-eusuchian crocodylomorphs, avialae through neoaves to at least extant order; minor updates to sauropoda, and an update on sauropterygia.

The site also finally got a long-needed search engine. It's not the most efficient thing in the world, but doing a binary search is essentially useless. I made it modular, so I may still end up seeing how a sort + binary search with preserved keys turns out, or cut out the cruft (change the amount of the document searched).
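
For the curious, "sort + binary search with preserved keys" could look something like the following Python sketch; the index layout and the sample entries are invented for illustration:

```python
import bisect

def build_index(entries):
    """entries: {entry_id: text}. Sort (term, entry_id) pairs once so
    lookups can bisect while the entry ids survive the sort."""
    return sorted(
        (word.lower(), eid)
        for eid, text in entries.items()
        for word in text.split()
    )

def lookup(index, term):
    """All entry ids containing term, found via two binary searches."""
    term = term.lower()
    lo = bisect.bisect_left(index, (term, ""))
    hi = bisect.bisect_right(index, (term, "\uffff"))
    return sorted({eid for _, eid in index[lo:hi]})

idx = build_index({
    "sauropoda": "titanosaur limb bones",
    "avialae":   "extant order limb feathers",
})
print(lookup(idx, "limb"))  # ['avialae', 'sauropoda']
```

Building the index is O(n log n) once; each lookup is then O(log n) instead of a linear scan over every entry.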

I finally also added a dirty implementation of tagging. I used the <tt> (teletype) tag and changed its contents to not display (CSS display:none). Thus, tags for an entry can be hidden in this element, and will be read by the search engine, but not displayed.

I also noticed that it was hard to find some of the sources I cited with just name and year, so I've started adding linkouts, DOIs, or ISBNs to all sources provided, to make it easier in the future.

It looks like less in writing than it felt like ... but I'm happy with the way it's going.

Crowdsourcing Science

Posted by tigerhawkvok on September 10, 2009 19:37 in dinosaurs , research , paleontology , internet

Mike Taylor, Matt Wedel, and a bunch of other folks have gotten together with an ambitious project: to crowdsource science. While this may seem bizarre at first glance, the idea is that there are many, many papers out there, with all sorts of information that can be useful — and in this case, they're looking for measurements on ornithischian limb bones. So, enter the Open Dinosaur Project. For science, for acknowledgements, or for possible co-authorship, just head on over, give it a quick read, and start contributing!

If you want to dive right in, just go straight to the page for contributors.

On a more personal note, this project has re-invigorated me to start looking at the last bits of data for my paper ... it's been sitting idle for too long, it's time for it to get out! It means I need to do some more proofing of it, in addition to filling in the empty bits — if anyone is interested in proofing, let me know.

Broadband Commentary Mark II

Posted by tigerhawkvok on September 02, 2009 14:26 in politics , internet

Remember my recent post on posting commentary to the FCC about broadband policy? Well, it suddenly just became more important. Via Slashdot, we have the following completely predictable in hindsight move by ISPs:

[...] [M]ajor internet service providers in the US are seeking to redefine the term 'Broadband' to mean a much lower speed than in other developed nations. In recent filings with the FCC, Comcast and AT&T both came out in support of a reduced minimum speed. 'AT&T said regulators should keep in mind that not all applications like voice over internet protocol (VoIP) or streaming video, that require faster speeds, are necessarily needed by unserved Americans.' On the other hand, Verizon argued to maintain the status quo, saying that 'It would be disruptive and introduce confusion if the commission were to now create a new and different definition.'

You read that right. The lousy US ISPs are trying to lower our abysmally low standards even further. If that happens, you can be assured our expensive, poor internet service will only get poorer. Put it in perspective with Verizon's comment: the best of the lot wants to maintain the status quo.

Please, everyone, take 15 minutes and send the FCC your opinions on the state of broadband and what we can do to improve it, and get everyone you know to do it, too. There's always the off chance that enough nerds will say enough interesting things that we might get an improvement! Say anything at all ... the most simple comment along the lines of "not enough competition, poor speeds with respect to the rest of the world, tighten controls and increase the baseline" is enough. Say whatever you're comfortable with (and if you can, throw in something about what would be a good definition for broadband), but say something!

Update: Some baseline information for you:

With the caveats out of the way, what are the results? The median broadband user in the States is getting about 2.3mbps and uploading at 435kbps. That compares pretty unfavorably to some of the industrialized Asian nations, where the median download speed is 63mbps, or Korea, where it's 49mbps. European nations also do well, with Finnish users getting over nine times the bandwidth, and France over seven times. Even going north of the border to Canada would likely get you a substantial increase in speed, as the median downloader there gets 7.6mbps.
Via Ars Technica, 2008/08/14

This year's analysis paints a slightly rosier picture in some ways, worse in others:

The 2009 speedmatters.org survey finds that the average download speed for the nation was 5.1 megabits per second (mbps) and the average upload speed was 1.1 mbps. These speeds are just slightly faster than the 2008 speedmatters. org results of 4.2 megabits per second (mbps) download and 873 kilobits per second (kbps) upload. In other words, between 2008 and 2009, the average download speed increased by only nine-tenths of a megabit per second (from 4.2 mbps to 5.1 mbps), and the average upload speed barely changed (from 873 kbps to 1.1 mbps). At this rate, it will take the United States 15 years to catch up with current Internet speeds in South Korea. Moreover, the average upload speed from the speedmatters.org survey is far too slow for patient monitoring or to transmit large files such as medical records.

The 2009 speedmatters.org survey also reveals that the U.S. continues to lag far behind other countries. The United States ranks 28th in the world in average Internet connection speeds. In South Korea, the average download speed is 20.4 mbps, or four times faster than the U.S. The U.S. trails Japan at 15.8 mbps, Sweden at 12.8 mbps, the Netherlands at 11.0 mbps, and 24 other countries that have faster broadband than we do.

Moreover, people in other countries have access to much faster networks. Ninety percent of Japanese households have access to fiber-to-the-home networks capable of 100 mbps. According to the Organisation for Economic Cooperation and Development (OECD), the average of advertised speeds offered by broadband providers in Japan was 92.8 mbps and in South Korea was 80.8 mbps download. According to the OECD, the U.S. ranks 19th in the world in average advertised broadband download speed at 9.6 mbps.
Via speedmatters.org

We are poor in terms of provided speed, and even worse in terms of serviced speed. We need reform!
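
As a sanity check on the survey's "15 years to catch up" claim, the linear extrapolation is simple; with the numbers quoted above (5.1 mbps now, gaining 0.9 mbps per year, chasing South Korea's 20.4 mbps, and generously assuming Korea stands still) it lands in the same ballpark:

```python
def years_to_catch_up(current, target, gain_per_year):
    """Linear extrapolation of the year-over-year speed gain."""
    return (target - current) / gain_per_year

print(round(years_to_catch_up(5.1, 20.4, 0.9)))  # 17
```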

Is it a weasel? Methinks it is.

Posted by tigerhawkvok on August 30, 2009 23:11 in computers , biology , programming , misc science , internet

Since I've wasted most of today on this (oh, feature creep. And an extra "-1".), I thought I'd share. Reading around on Pharyngula, I found myself wanting to make a more customizable, even-less-preprogrammed version of Richard Dawkins's WEASEL program. So, I present you with a Python 3.0+ version of Dawkins's "Methinks it is a weasel" program. It will take any nonzero-length starting string; has a maximum generational cap for local maxima; and offers customizable rates for single-bit errors (like SNPs) and duplication errors, the number of offspring, weighting for approaching the target sequence, accepted variability, and whether bad mutations are penalized or not.

Sample output. Click for larger / more lines.

The point of the program, as published in The Blind Watchmaker, is to show that with random mutations over time you can expect to see something like order pop out of a random test string. The version I have below takes the genetic modelling a bit further, starting with any string, regardless of length, and duplication / omission errors will trim down to the right solution. In debug mode, there's an additional completely random selection every 500 generations to help pull the program out of local maxima, but it is not in the primary program.
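
The full, configurable source is below; as a bare-bones illustration of the cumulative-selection idea it builds on (fixed-length strings only, with none of the duplication/omission machinery), a minimal weasel might look like:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate, rng):
    """Independently mutate each character with probability rate."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

def weasel(offspring=100, rate=0.05, seed=1):
    """Cumulative selection: keep the fittest child each generation."""
    rng = random.Random(seed)
    current = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while current != TARGET:
        generation += 1
        current = max((mutate(current, rate, rng) for _ in range(offspring)),
                      key=fitness)
    return generation

print(weasel())  # prints the generation at which the target is reached
```

The point carries over: random mutation plus cumulative selection converges orders of magnitude faster than drawing whole random strings and hoping.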

Click the fold to read the source, or download it here


Broadband Commentary

Posted by tigerhawkvok on August 21, 2009 19:56 in politics , internet

Via ArsTechnica and Blogband, I found my way over to comment on the FCC's request for comment (sound redundant, much?) on broadband speeds:

The best way to define broadband is based on a 24-day mean of a benchmark (say, 100 MiB) file download from the ISP. This 24-day average allows for a new hour to test each day (from 0:00 to 23:00), the file size lowers the relevance of "bursting", and the duration weighs out day-to-day fluctuations. The days could sequence testing HTTP, FTP, VOIP, and BitTorrent traffic, repeating every four days. This would discourage packet manipulation to favor one protocol over another.

Further, I would strongly suggest that the FCC create a definition for both "basic" and "high-speed" broadband, as high-speed is used misleadingly. I would suggest (to spur innovation) that at least [1 Mbps down/768 kbps up] speeds for "basic" (upload speeds have been lacking, probably due to lack of incentive) and at least [5 Mbps down/2 Mbps up] speeds for "high-speed". In 2009, no one should call something broadband with less than a megabit average connection.
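
The rotation described in that comment is mechanical enough to write down. Here is a Python sketch of just the schedule (day n tests hour n and cycles through the four protocols); the measurement itself is omitted:

```python
PROTOCOLS = ["HTTP", "FTP", "VOIP", "BitTorrent"]

def schedule(days=24):
    """(day, test hour, protocol) tuples for one measurement window."""
    return [(day, day % 24, PROTOCOLS[day % 4]) for day in range(days)]

for day, hour, proto in schedule()[:4]:
    print(f"day {day}: test at {hour:02d}:00 with {proto}")
```

Over 24 days every hour of the day gets tested exactly once and each protocol six times, which is what washes out both time-of-day and protocol bias.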

I truly feel that these small changes can help make the US competitive in broadband speeds.

If you feel strongly about the broadband situation in this country, I would encourage you to head over to the comment form. If that link doesn't work, go to this page and search for Docket 09-51, select the radio button, and scroll down and select "continue".

Website Development

Posted by tigerhawkvok on August 15, 2009 23:50 in computers , websites , internet

There's a degree of subtlety in webwork that is easily missed, but is particularly noticeable when you spend a lot of time designing sites, be it for yourself or for clients. I think I'll break this down into a few miniposts, to share my ideas on the topic, give a bit of threading through a few posts, and, well, to hopefully bring some ideas out. Right now, my plan is to look at:

  • Code validity
  • Aesthetics vs. Usability
  • Simple engines and security
  • Why I hate Javascript and Flash
  • Website modularity
  • Front-end website interfaces

Which brings us to this first post on code validity. Now, that is a bit of an obscure choice to start with. Really, who cares? The number of sites lacking a Doctype declaration is fairly amazing. People still mix display markup with content markup. What's with all this, well, kvetching about web standards?

Let's start with the doctype declaration. For example, this blog uses (for now, anyway, and valid as of this posting) the XHTML 1.0 Transitional doctype:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

This tells the browser to use this style of interpretation when rendering the markup. In particular, the browser should handle certain not-strictly-OK bits of code, such as <input> tags existing outside of block level elements (that is, elements that occupy a chunk of a webpage instead of being able to exist in-line).

This touches on another aspect of valid code. Different types of elements have different nesting requirements. For example, the tag used to write this paragraph, <p>, is by default a block level element. It occupies a block of a webpage, and cannot have other block-level elements inserted into it, like a second paragraph element, or a <div> element (as a general rule; some elements are special, notably <div> and <table>). A block level element creates a new line when formed.

By contrast, inline elements, such as the anchor tag (<a>, most commonly used for links), cannot exist outside of block-level elements. This separation of block and inline elements simplifies layout design and makes websites render more faithfully across browsers; when this isn't done, browsers need to make a non-standardized choice about how to separate these items. Do they form a new line or not? Are the spacings different? Does it implicitly form another block-level element until closed? And so on.

The next item is a bit of a change from the old style of coding back in the 90's and early 2000's. It used to be that you could see bits of code like this:

<p><font color='green' type='comic sans ms'>Whee green!</font></p>

Now, you would see one of these:

<p class='greenpara'>Whee green!</p>
<p id='greenpara'>Whee green!</p>
<p style='color:green;font-family:comic sans ms,segoe print,pristina;'>Whee green!</p>

Which would all render

Whee green!

So what is the difference? In the first example, the style information lived in a separate element, nested in with the content markup. In the latter three, the content is just the paragraph element, and the styling information is referenced by a class identifier (which can apply to any number of items with that class), an ID (just one item), or an inline style applied to the entire paragraph element. This is known as CSS, or Cascading Style Sheets. This means you can actually have multiple styles for a page, as well demonstrated by the CSS Zen Garden, which has one page with multiple stylesheets to give it vastly different styles. Less amazingly snazzy ways of seeing the same thing can be seen in the beta site I had done (wow, has it been a year?) for the UCMP. Over here, you see the full site in all its CSS glory; over here you see the site with the external CSS sheet stripped out, and only inline elements remaining; and here an alternate color scheme that never got off the ground (and is thus incomplete).

Part of the beauty of CSS was that it allowed stylesheets to be moved externally to other files, and then referenced by <link> or <style> elements. This means one stylesheet can be used by multiple pages, and be updated separately, with the change reflected across all pages simultaneously. For example, you can see the layout code for this site here.

If you want to check out the validity of your code, you should check out http://validator.w3.org/ — at the time of this writing, the front page (and all the posts on it) have been marked up correctly and return a valid page.

Till the next entry ...