Should your wireless carrier care what you're saying?

What if your local telephone company charged you differently for different kinds of calls? Suppose they charged a percentage of the orders you place with your broker or Lands’ End or QVC? Or charged you more if you used a modem? How would that make you feel?
If that sounds invasive, why do we put up with it from our wireless carriers?
While they seem to have given up (for the time being) their dream of charging vigorish on Web transactions conducted over cell phones, they’re still desperate to discriminate between voice and data and to charge differently for them.
This doesn’t make sense. On digital systems, voice and data are both transmitted as bits. And it’s not obvious, once you think about it for even a moment, that data uses any more bandwidth than voice. Digitized full-duplex voice consumes bandwidth for the entire conversation, while the typical online session (email and web browsing) is mostly idle time, and its high-bandwidth bursts are probably far shorter.
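To put rough numbers on that claim, compare ten minutes of full-duplex voice with ten minutes of typical browsing. The figures in this sketch are illustrative assumptions, not carrier data:

    # Back-of-envelope comparison. Every number here is an assumption
    # chosen for illustration, not a measurement from any carrier.
    VOICE_KBPS_EACH_WAY = 13   # assumed digital-voice codec rate
    MINUTES = 10               # same duration for both sessions

    PAGE_KBYTES = 30           # assumed average size of a page or email
    PAGES_PER_SESSION = 20     # bursts; the rest of the session is idle

    # A full-duplex call streams continuously in both directions.
    voice_kbytes = VOICE_KBPS_EACH_WAY * 2 * MINUTES * 60 / 8

    # A browsing session only moves bits during its short bursts.
    data_kbytes = PAGE_KBYTES * PAGES_PER_SESSION

    print(f"10-minute voice call:  ~{voice_kbytes:,.0f} KB")  # ~1,950 KB
    print(f"10-minute web session: ~{data_kbytes:,.0f} KB")   # ~600 KB

On those (admittedly rough) assumptions, the voice call moves about three times as many bytes as the browsing session.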
The most likely bandwidth-hogging applications are file downloads and streaming media. These are easily dealt with by charging by the minute, just as you would for long, intensive voice conversations.
There is no excuse for metering wireless digital connections by the byte.
But there is a reason: the wireless carriers spent too much money on spectrum and 3G hardware in anticipation of gouging us. It’s time to allow them to fail.

An AP Stylebook for URL's?

Jeremy Zawodny agrees that URL’s matter. He says they should be short and guessable. URL’s that aren’t are usually the result of developer laziness.
He then goes on to tell the story of a long thrash at Yahoo about how a certain URL should be formed. I’ve been in similar discussions and they rapidly get tiresome. But I’m glad someone’s taking the time to do it.
We have standards for Internet ad formats; why can’t we have them for URL’s?

"We were once so close to Heaven, Saint Peter gave us medals declaring us the nicest of the damned."

We’re on the verge of an historic achievement — and we are also on the verge of throwing it all away.
One of the benefits of the Internet bubble was that everyone felt they had to get on the Web — in a hurry. And so we dumped a few billion pages of information on HTTP servers in the only format we had (humble HTML) and cast our fate to the winds. A lot of what we did was badly done, but we did it, and people started using it.
Over time, people started linking to and illuminating that information with a complex network of web pages. Finally, all those threads are woven together by the search engines, especially Google, which extracts as much information from the links as from the pages themselves.
But this tapestry is full of holes, and the holes are growing.
It’s beyond dispute that the value of the Net as a whole is greater if the information on it is available freely to all. It’s less clear whether individual publishers will be better off if they give their information away to the Web community.
The value of this information has never really been unlocked. Most of it is stored in large databases like LexisNexis and Dialog, where skilled researchers pay $120 per hour to massage it. It’s a surprisingly small business, only a couple of billion dollars a year. That’s a lot smaller than, say, the Internet advertising business.
Imagine the value that would be created if this resource were unlocked and freely accessible and linkable. But we seem to be moving in the opposite direction.
Publishers, justifiably concerned about optimizing their shareholders’ return, are beginning to turn to a new wave of technologies and business models to increase the value of their own businesses at the expense of the Net.
They’re moving more of their information into proprietary databases, charging subscriptions for access to their web sites, and, this quarter, flirting with proprietary data formats.
It’s beyond belief that any publisher would trust their livelihood to Microsoft’s good intentions at this late stage. Or Adobe’s for that matter. Or even smaller companies. All these companies are promoting formats for future electronic publications that are designed to make them more like print — unlinkable, uncopyable, unsharable. But also indecipherable without reader software.
But why put your product in formats controlled by companies that have demonstrated their intention to extract increasing rents from their vassals? Why wouldn’t you use formats that are not only more powerful, flexible, and future-proof, but also free for the taking?
These moves on the part of publishers are mostly boneheaded and self-defeating. But they can do a lot of damage in the short run, or even the long run, if the right publishers make the wrong moves for too long.
Anyone in an organization considering such a format should be asking pointed questions about the long-term licenses being granted by the format owners, the caps on any license fees, and what those promises will mean when the format, the reader software, or the platform is upgraded.
You should also consider the effect of the kind of forced-upgrade policies that Microsoft is implementing now for its operating system and applications software. It’s doubtful Microsoft or Adobe will make any promises about future fees, but you should ask yourself what those promises would really be worth.
Microsoft sees owning your software as a license to extract rents from you in perpetuity. And the only limit on those rents is what the market (collectively) will bear. It’s also important to understand that any sane software company (Adobe, for example) will do exactly the same thing if given the same position.
[Thanks to They Might Be Giants for the quote in the headline]

Another reason to love the Net

This morning, Steve Outing posted a piece about the small amount of content in the markup on some news sites. Later this morning, I posted a piece (below) calculating the content/code ratios for some major news sites. By evening, Adrian Holovaty had not only calculated and published ratios for a bunch more sites, but also built a tool for calculating the content/code ratio of any web page.
I’m still stunned by the velocity with which information travels on the Net, and how the amount of information increases with each step.
Also, RSS syndication has made it possible for people to communicate through Web sites almost as quickly as they could on bulletin boards, and with a much higher quality of discourse.

News sites: quadruple your bandwidth at no additional cost — guaranteed!

Gary Stock of Nexcerpt tells Steve Outing that 90% of some news pages consist of stuff the reader never sees. Only 10% is actual “content”. The rest is structure. This has several implications.
First, it takes four or five times as long to load the page as it should.
Second, all that markup has to be parsed and the JavaScript executed, so there’s even greater overhead associated with it. Table tags in particular take a long time to render.
Third, this is yet another powerful argument for cascading style sheets: an external style sheet loads only once, then persists in the reader’s cache.
I tried this out on the San Jose Mercury News home page and he’s right. There are 81756 characters in the page source, of which roughly 8000 (less than 10%) are content — including navigation text and some script elements.
Here are my (rough) calculations of content-to-total-HTML ratios on the home pages of some popular news sites (a sketch of the measurement follows the list):

  • Wall Street Journal: 22380/59810 = 37%
  • Google News: 16908/62881 = 27%
  • News.com: 9355/46857 = 20%
  • Yahoo News: 11000/56601 = 19%
  • New York Times: 10253/75262 = 14%
  • San Jose Mercury News: 8000/81756 = 10%
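
For the curious, here is a rough Python sketch of how such a measurement can be made. It is my own approximation, not Adrian’s tool; it counts as “content” whatever text survives after scripts, styles, and tags are stripped:

    # Rough content/code ratio: visible text versus total page source.
    # A crude approximation; real pages deserve a proper HTML parser.
    import re
    import urllib.request

    def content_code_ratio(url):
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        total = len(html)
        # Drop script and style blocks entirely, then strip remaining tags.
        text = re.sub(r"(?is)<(script|style)\b.*?</\1\s*>", "", html)
        text = re.sub(r"(?s)<[^>]+>", " ", text)
        # Collapse whitespace so markup padding doesn't count as content.
        content = len(re.sub(r"\s+", " ", text).strip())
        return content / total

    print(f"{content_code_ratio('http://www.example.com/'):.0%}")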

It’s hardly a coincidence that the sites with the highest content/markup ratios also have the most news on them. The Wall Street Journal fits nearly three times as much news as the Mercury News into three-quarters of the space.
Imagine a technology that could quadruple your bandwidth (and reduce your overhead) at no additional cost. Would you adopt it?

SprintPCS "Vision"…feh!

Well, I’d pretty much come up with a solution to my hunger for a mobile connection to the Web: SprintPCS Vision’s all-you-can-eat plan. I have determined (thanks to Google’s Usenet archives, no thanks to SprintPCS) that you can connect a Mac to one of their phones and get onto the Net. But you can’t buy the cable in their stores; it’s only available online, for $70.
But here’s the kicker that was never explicitly explained in their advertising or their news releases: the offer doesn’t apply to computers connected via their PC-card wireless modems, nor to computers tethered to one of their new phones via their USB cable.
The relevant passage in SprintPCS’s agreement is here, thanks to Slashdot:

Sprint may deny or terminate service without notice where use is in connection with server devices or host computer applications, other systems that drive continuous heavy traffic or data sessions, or as substitutes for private lines or frame relay connections. Unlimited PCS Vision offer for PCS Free & Clear Plans with Vision is: (a) only available with a Vision capable PCS Phone or PCS smart phone device; and (b) not available with Connection Cards, Aircards, or any other device used in connection with a computer or PDA – including phones, smart phones or other devices used with connection kits or similar phone-to-computer/PDA accessories. Sprint reserves the right to deny or to terminate service without notice for misuse.

Now, it appears that you can probably get away with using this connection occasionally, which is what I want to do. But clearly, they make no guarantee you’ll ever be able to use this service with your computer, or that they won’t deny you service arbitrarily.

Building the Net's directory

Doc Searls has come up with an idea so elegant and right that it’s amazing no one has thought of it before: let’s create a directory of the world’s web sites.
Why can’t we have a repository where we can post information about our sites: the last time a site was updated, how it’s organized, how it’s marked up, how it should be indexed? RSS readers could look there instead of sucking down XML every fifteen minutes. Search engines could go there to find new sites, instead of making us notify them individually that we’re waiting to be indexed.
It’s everything that the whois database(s) should be, but never dreamed of being.
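To make the idea concrete, here is a hypothetical sketch of how an RSS reader might use such a directory. The service URL, the record’s fields, and the function are all invented; nothing like this exists yet:

    # Hypothetical: poll one small directory record per site, and fetch
    # the full feed only when the record says something has changed.
    import json
    import urllib.request

    DIRECTORY = "http://directory.example.org/sites/"  # invented service

    def changed_since(site, last_seen):
        """Ask the directory when the site last changed, instead of
        downloading the whole feed just to find out."""
        with urllib.request.urlopen(DIRECTORY + site) as resp:
            record = json.load(resp)
        # A record might also describe how the site is organized,
        # how it's marked up, and how it should be indexed.
        return record["last_updated"] > last_seen

    if changed_since("example.com", "2002-10-01T00:00:00Z"):
        print("fetch the full RSS feed")
    else:
        print("skip it; nothing new since last time")

One lightweight lookup replaces a full feed download on every idle poll, which is the point of having a directory at all.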

Good taste costs no more

Knight-Ridder Digital’s web sites are ugly, indistinguishable, slow, and confusing. And they make money. Ergo, they don’t need fixing.
That’s the baffling message of a story in Business 2.0. It’s interesting that this story comes out the same week as a worldwide study showing that consumers associate clarity and ease of navigation with credibility, and other research showing that advertising clutter lowers response. KRD’s pages are cluttered with noisy house ads.
Business 2.0 misses some important points in their own facts.
First, KRD’s revenue numbers look unreliable: “At the Fort Worth paper, customers are offered a two-day newspaper/30-day online combo called the FlexAd for about $150 less than the newspaper ad alone.” A lot of the revenue is apparently an allocation of print revenue.
Second, the model looks unsustainable: “anyone placing a $230, 10-line Sunday classified ad in the San Jose paper gets a seven-day Web ad for $50 more.” KRD seems to be pricing its Web advertising relative to print, not to online competitors. How does that price compare to Craig’s List or eBay?
Third, whatever success KR Digital enjoys is apparently the result of reducing fixed costs, which is a key online strategy. But to quote Richard Gump, good taste costs no more. You can’t defend bad design on the basis of cost savings.
Business 2.0 fails to answer the most important question. How much better would KR Digital be performing if its content management system worked better, its sites were easier to navigate and less ugly, and local editors and publishers had some control?

Media double standard

A lot of magazines’ guaranteed circulations are lies. As a former magazine circulation manager, I’m not surprised.
It’s pretty easy for magazine publishers to lie. The auditors have loosened the rules and are pretty cozy with the companies being audited (sound familiar?).
Even so, magazines can’t even tell you how many of their “readers” actually looked at the magazine, let alone any of the ads.
The article linked above goes on to say:

Why do we all (advertisers included) winkingly put up with the Big Lie? “Because everyone does it,” says one editor. “Advertisers are just used to paying more per reader than they or the magazines are willing to admit.”

Please remind me why advertisers are unwilling to buy Internet advertising, where every “page view” represents an actual page viewed?

Nice computer. It would be a shame if anything were to happen to it.

Should we have to pay to keep popup ads off our computers? Steve Outing quotes online media consultant Mark Potts as suggesting just that.
There’s nothing wrong with charging for ad-free content. But there is something wrong with having to pay to prevent pop-ups. First, any decent browser (i.e., not IE) already lets you defeat pop-ups. Second, I shouldn’t have to pay anyone not to execute unwanted programs on my computer. That’s extortion.