Shirky's Group Essay/Keynote:

And the worst crisis is the first crisis, because it’s not just “We need to have some rules.” It’s also “We need to have some rules for making some rules.” And this is what we see over and over again in large and long-lived social software systems. Constitutions are a necessary component of large, long-lived, heterogenous groups.

Ed sent this to me, and it looks like it’s become quite the rage in the blog-world. Also, there’s quite a bit about wikis, e.g.,

It really quickly becomes an assumption that a group can do things like “Oh, I took my PowerPoint slides, I showed them, and then I dumped them into the wiki. So now you can get at them.” It becomes a sort of shared repository for group memory. This is new. These kinds of ubiquity, both everyone is online, and everyone who’s in a room can be online together at the same time, can lead to new patterns.

More Webservice Breakthroughs in Enthusiasm

Zane’s a wee bit more bombastic than I am, but we usually agree on the essential ideas, e.g.,

First off, their technology is neither cool nor
unique. SOAP is nice enough, but it’s pretty weak. All
this shit about “once everyone uses X, everything will
work together” is like believing that once
everyone adopted TCP/IP, everything would work
together.

Well, unlike this crap that’s coming out
now, everything does use TCP/IP, and you know what,
I still can’t go to http://www.tvguide.com and click on my
favorite shows and have my TIVO record them for me any
more than I can quickly and easily send hate mail to
all these morons writing about how shit is going to
“revolutionize everything.”

Not that some of this stuff won’t be used, and things
won’t get better, but it took, what, 15, 20 years? for
TCP/IP to make it so I can ping your computer. I
highly doubt that SOAP is going to mean my toaster
will email me when my toast is ready by next June.

Which is to say, it’s a well done protocol for interop between systems, but a protocol alone isn’t going to do much for you.

The "Should" Rule

A code base of excellent quality will have constraints inherent in the
flow and structure of the objects and code. It should make much of
what people test for simply impossible to achieve, and therefore
eliminate the need for testing.

. . .

And that’s what a mature system should look like, one where
mistakes are not “checked” [by unit tests], but rather one where the
common mistakes simply cannot happen.

(Italics mine.)

Taking those words straight up, and slightly out of context, I remember a
little coding rule I’ve been kicking around in my notes: if you find
yourself using the word “should,” you need checks and/or process to
verify that whatever statement you’re making is true or works — not
only in the present, but in the future. That is, in the world of
programming “should” is equivalent to “can’t be trusted to”; tests
should be done appropriately.

“Code should work.”

Code should always work, and it
should always be perfect and bug free. The code I produce
should work under various conditions, even the simplest ones,
and, when people, including myself, modify either (1.) my code, (2.)
code that uses my code, or (3.) code that my code uses, everything
should still work correctly…given that there are no bugs in
any of the code involved.

We’re all perfect programmers, so this has always been y’all’s
experience, right? There are always bugs, of course, and that’s why we
always say “should always work,” instead of “always works.”

“Should” is Helpful

As I’ve mentioned recently, we programmers aren’t as scientific as
you’d think; we’re incredibly superstitious. However, in this case of “should,” our
computational voodoo is actually beneficial.

When you’re talking about a piece of code that you know isn’t
perfect, you often have an uncontrollable, unconscious urge to use
“should.” If you know the code is perfect, you don’t.

In this way, “should” can actually be quite helpful for detecting
when tests are needed. When “shoulds” start popping up, it’s time to
test and verify. Of course this isn’t the only indication that tests
are needed, just one of the many. Additionally, you can train yourself
to never use the word “should,” but that’s cheating ;>

Unit Tests

More on topic with Zane’s original post, I don’t quite agree with the phrasing of “Don’t
Unit Test,” even with the “5-20% test coverage” clarification.

Solo vs. Group Coder

It’s important to understand that
Zane is a solo programmer: he’s the only one working on his code
base. Under such circumstances, doing unit tests is really up to the
discretion of the single coder; essentially, unit testing becomes an
optional tool for the solo-coder, not an essential tool to assure a
group of coders can collaborate rapidly and “correctly.”

If I were a customer, I’d certainly favor a product that had lots of
unit tests, but I don’t think many customers are programmers.

As a solo-coder, one plays the role of requirements gathering,
specifying, coding, and testing all at once: there’s no need to
coordinate communication between these 4 distinct roles. In group code
projects, these roles are often played by different people, and
different groups of people at that: one group of people gathers
requirements, another writes up the requirements (use cases,
architecture, design, etc.), another writes all the code, and still
another group of people tests it.

Unit Testing Benefits

Unit tests help these 4 groups coordinate and work together. A good
suite of unit tests…

  • specifies what the code does: what inputs cause what
    outputs. If the requirements people have gotten themselves involved
    enough with the creation of unit tests (something that too often fails
    to happen), they can assure that the tests validate many of their
    requirements, esp. data-centric ones, e.g., “Data will be formatted
    day/month/year.” (A tiny sketch of such a test follows this list.)
  • tells the coders the minimum behavior that must be coded: that
    is, once the coder has written code that passes all unit tests 100%,
    they know they’ve done the minimum amount of coding required. I think
    this is a greatly underappreciated and poorly used aspect of unit
    tests. They tell programmers exactly what to code: “if your code passes
    these tests, your work is done.” Of course, more coding is
    often needed (along with new tests for that new coding) and
    it’s extremely difficult to computationally test the system as a
    whole (the “application” rather than the individual parts of code),
    but it’s nice to know that you’ve reached a certain baseline.
  • reduces the amount of manual testing that must be done: unit
    tests are run automatically, not by hand. In business — sad as it can be socially
    — the less involvement by actual humans, the better. ’nuff said.
  • helps you fearlessly refactor. The ability to refactor your code
    base — improving code without changing its outward-facing behavior
    or, at least, effect — is priceless and can be a tremendous boon.
    But, when refactoring even the smallest piece of code, you must
    re-test your code base to assure everything still works: without
    unit tests, there’s really no way to do this. Michael Feathers has
    an interesting MS in progress essentially about this very point;
    there’s also a 12-page article
    (https://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf)
    for those who want a shorter read.
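
To make the first bullet concrete, here’s a tiny sketch of what such a requirement-pinning test could look like. It assumes JUnit, and the DateFormatter class is made up for illustration; it’s not from anyone’s real code base.

class DateFormatter {
    // Hypothetical class under test: formats dates as day/month/year.
    String format(int year, int month, int day) {
        return pad(day) + "/" + pad(month) + "/" + year;
    }

    private String pad(int n) {
        return n < 10 ? "0" + n : String.valueOf(n);
    }
}

public class DateFormatterTest extends junit.framework.TestCase {
    // Pins down the "Data will be formatted day/month/year" requirement.
    public void testFormatsDayMonthYear() {
        assertEquals("24/06/2003", new DateFormatter().format(2003, 6, 24));
    }

    public void testPadsSingleDigitDaysAndMonths() {
        assertEquals("01/02/2003", new DateFormatter().format(2003, 2, 1));
    }
}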

Those are just 4 cases where unit tests can and do help out. Unit
tests are good and helpful if people use them correctly and
effectively, just like any tool. I’m suspicious of claims that
writing unit tests takes up too much time and hurts the development of
systems: testing every get and set is, of
course, stupid to do by hand, but that follows from the general rule
“Don’t do stupid things.”

Virtualization, VMware:

Virtualization is one key component of that vision. By allowing administrators or even automated management software to move computing jobs easily from one hardware system to another, virtualization makes it easier to upgrade hardware, allocate more computing horsepower to a given job, adjust to equipment failure, or make other changes.

The Crazy XML Chart

Several folks told me today they liked the recent XML-related posts (on RDF and webservices), so I thought I should pass along the crazy XML chart that’s been floating around recently.

It has a few questionable things on it — like GIF, PNG, and JPEG — but the point is: if one were to be a master of XML, all these crazy things would be pies you’d have your fingers in.

I like XML, don’t get me wrong. It just gets overwhelming and it’s easy — often, too easy — for XML to become a golden hammer.

Apple's Asia-Pacific VP Interview:

“Coming from a Wintel world, I was pleasantly surprised to see a very heightened passion in the people who work in the company. They have the feeling that ‘I work for a creative and innovative company.'”

As a result, Ho’s management challenges are not about motivating executives. Rather, he focuses on dealing with the diverse and sometimes opinionated people whom Apple attracts: “The challenge is to manage a group of creative, passionate people. I want to harness the creative energies of every individual in the organisation.”

Java I/O Sucks Goat Ass

I’ve said it before, and I’ll say it ’til I lose my voice: I fucking hate Java I/O. All I want to do is get all the contents of whatever’s in a stream, a reader, a buffer, or whatever the hell else I have a handle on. Can I just call something like the below:

String contents = new File("somefile.txt").getAllMyShit();

Of course the fuck not! I have to keep track of byte arrays, char arrays, or some stupid-ass shit in a while block. I just want the God damned content; I don’t care how the JDK gets it out of there.
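
For the record, here’s roughly the while-block ritual I’m complaining about; the Slurp class and readAll() method are mine, but the ceremony is all the JDK’s:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class Slurp {
    // Track a char buffer, append in a loop, close in a finally: all to get one String.
    public static String readAll(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        try {
            StringBuffer contents = new StringBuffer();
            char[] buffer = new char[4096];
            int read;
            while ((read = reader.read(buffer)) != -1) {
                contents.append(buffer, 0, read);
            }
            return contents.toString();
        } finally {
            reader.close();
        }
    }
}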

OK, I feel better now.

More Yeti the Dog


Yeti

I put up some more photos of Yeti, my dog.

Update: Kim took Yeti to the vet today. He’s gained 6 pounds in the last two weeks: he weighs 59 lbs. now. Also, he has some sort of bacterial infection called “stivies” or something. The vet gave Yeti a 14-day prescription, so hopefully it’ll go away.

Late Night Heraclitus:

If one does not expect the unexpected one will not find it out, since it is not to be searched out, and is difficult to compass.

I’ve been thinking about old Mr. “Everything is fire” recently. I’m not sure why, but he’s always fun reading, e.g.,

  • Sea is the most pure and the most polluted water; for fishes it is drinkable and salutary, but for men it is undrinkable and deleterious.
  • Disease makes health pleasant and good, hunger satiety, weariness rest. (A fine tagline for The Gay Science.)
  • Heraclitus somewhere says that all things are in process and nothing stays still, and likening existing things to the stream of a river he says that you would not step twice into the same river.

"Is the Semantic Web Hype?":

The following statements are nonsense:

  • “RDF is more semantic than XML”
  • “RDF allows us to reason concretely about the real world”
  • “The power of RDF is its semantic model”

I came across this excellent presentation by Mark Butler, of HP, today. It manages to explain RDF and the semantic web through concise lists and quotes from the XML/RDF world.

Watching the RDF wheel spin around in the proverbial mud has always been interesting, but disappointing. There’s an ass-load of text — or “churn” as some call it — spent explaining what seems like a simple concept, i.e.,

RDF Term     Example
Subject      DrunkAndRetired.com
Predicate    Created by
Object       Coté
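
If it helps to see that triple as code instead of a table, here’s a minimal sketch; it assumes the Apache Jena RDF library and borrows Dublin Core’s “creator” property to stand in for “Created by,” neither of which comes from Butler’s presentation:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.DC;

public class TripleExample {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        // Subject: the site. Predicate: "created by". Object: "Coté".
        Resource site = model.createResource("http://www.drunkandretired.com/");
        site.addProperty(DC.creator, "Coté");
        model.write(System.out, "N-TRIPLES");
    }
}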

S.S. Abstraction

In the more concrete coding world, we have the concept of “over-abstraction”: basically, the design for something is so high-level and abstract that it’s useless for any practical application. Ed dubbed this concept the “S.S. Abstraction.” Usually when the S.S. Abstraction docks at your port, you spend a lot of time writing and talking about design before writing a prototype or executing any code; that is, completely groundless design claims get made. You’d think that programmers are very scientific and numbers oriented, but after just a slight dip in the stream, you realize that we’re very superstitious, non-Baconian type people: we practically follow our own form of computational voodoo.

Back to RDF…

After reading the presentation, esp. the quotes pulled from XML big-wigs, my feeling that RDF is an example of the S.S. Abstraction in the standards world seems sound: the RDF standard appears to be evolving without enough testing of its usability as a technology; that is, how useful and easy RDF is for programmers to use.

On a brighter note, it is a very young standard, and there does seem to be quite a bit of self-corrective kick-backing going on. As one of the quotes in the presentation says,

25 years ago, Ed Feigenbaum described
Terry Winograd’s work (on Artificial
Intelligence) as a “breakthrough in
enthusiasm.”

I worry that web services and the semantic
web, in their reliance on effective
computational semantics are vulnerable to
the same criticism.

If I May be so Brazen: "Webservices…ugh!"

All of these concerns and recommendations are exactly what make my stomach curl when I think of webservices, e.g.,

Design the interface as a dictionary, though not as an object based wrapper around dictionaries (ie prefer a Map over a Bean). I’ve seen Python and Lisp code that does this well as they have good support for meta-class hacking; and it’s sometimes called data-driven programming in the Lisp world.

Ahhhh! To me — a type-safety, contract-based coding, OO nut — webservices are a massive step backwards into the procedural flaming swamp-world I despise.
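
To show what I mean, here’s the “prefer a Map over a Bean” style sketched in Java; all the names are made up for illustration:

import java.util.HashMap;
import java.util.Map;

public class DictionaryStyle {
    // The "dictionary" interface: the compiler can't check the keys or the value types.
    static Object handleRequest(Map request) {
        return "Hello, " + request.get("name");
    }

    public static void main(String[] args) {
        Map request = new HashMap();
        // A typo like "nmae" here compiles fine and only fails at runtime.
        request.put("name", "Coté");
        System.out.println(handleRequest(request));
    }
}

A Bean with a getName() would have caught that typo at compile time, which is exactly the contract-based safety I hate giving up.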

JSR 174: Monitoring and Management Specification for the JVM:

A specification for APIs for monitoring and management of the JavaTM virtual machine. These APIs will provide Java applications, system management tools and RAS-related tools with the ability to monitor the health of the Java virtual machine as well as manage certain run-time controls…

. . .

The majority of the existing monitoring and management options and techniques are very limited, lack functionality, degrade performance, and are unreliable and non standard, leading to a multitude of disconnected solutions.
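
For a taste of what this could look like, here’s a minimal sketch against the java.lang.management API that grew out of this JSR; it assumes a JVM new enough to ship that package:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class JvmHealth {
    public static void main(String[] args) {
        // Standard MXBeans expose the run-time numbers you'd otherwise dig for.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        System.out.println("Heap used (bytes): " + memory.getHeapMemoryUsage().getUsed());
        System.out.println("Live threads: " + threads.getThreadCount());
        System.out.println("Peak threads: " + threads.getPeakThreadCount());
    }
}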

Demeter's Law Remix

I like this version of Demeter’s Law: “Don’t use more than one dot.” Yuh! (A quick code sketch of the rule follows the quoted list below.)

A longer version, and the following explanation, can be found at the Pragmatic Programmer’s site:

What that means is that the more objects you talk to, the more you
run the risk of getting broken when one of them changes. So not only
do you want to say as little as possible, you don’t want to talk to
more objects than you need to either. In fact, according to the Law
of Demeter for Methods, any method of an object should only call
methods belonging to:

  1. itself.
  2. any parameters that were passed in to the method.
  3. any objects it created.
  4. any composite objects.
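
Here’s the quick sketch promised above. The Order/Customer/Address classes are made up, but they show the one-dot version of the rule next to the three-dot version:

class Address {
    String getZipCode() { return "78701"; }
}

class Customer {
    private final Address address = new Address();
    Address getAddress() { return address; }
    // Delegation keeps callers down to one dot.
    String getZipCode() { return address.getZipCode(); }
}

class Order {
    private final Customer customer = new Customer();
    Customer getCustomer() { return customer; }
    String getShippingZipCode() { return customer.getZipCode(); }
}

public class DemeterDemo {
    public static void main(String[] args) {
        Order order = new Order();

        // Three dots: this caller depends on Order, Customer, and Address.
        System.out.println(order.getCustomer().getAddress().getZipCode());

        // One dot: this caller depends on Order alone.
        System.out.println(order.getShippingZipCode());
    }
}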

On a slightly related note, Zane recently emails:

Indeed, Henney suggested it himself, something like:

iterator.next();
doSomething(iterator.current());
doSomethingElse(iterator.current());

you save yourself from unnecessarily having to create a
snapshot variable like:

Object snapshot = iterator.next();
doSomething(snapshot);
doSomethingElse(snapshot);

Yet, I maintain that the first piece of code is
actually quite dangerous. If you ever go multithreaded
and don’t explicitly protect your two calls with a
synchronization (which will degrade overall
efficiency), then you might be acting on different
variables at different times depending on how the
iterator is coded and who has a handle on it.

To which my first response is, in the immortal re-phrasing by Kinman,
“TU ES CORRECTO, SENIOR! SI!” Using an
Iterator in this fashion is indeed a very bad
idea. But, what if the object wasn’t going to suddenly change state
because some schmuck-thread called next()?

Keep Returned Values

The rule of thumb I tend to follow is
that if I’m going to be using a return value from an object more than
once, I put it into a local variable, e.g.,
snapshot. This is more a code-readability issue than
anything else: it looks less cluttered. On the other hand, one could
successfully argue that it does seem to clutter up the code
more; what’s clutter and what’s not isn’t often black and white. (If you’re one of these people, you’re probably also nuts for
code like,

new DoSomething(new Date(), new String[] {"param1", "param2"}).execute();

That shit drives me crazy, but some folks like it.)

Query Once

On a more technical, paranoid-design-school note: one doesn’t always
know what a seemingly simple
getSnapShot() type method will do: it might be more than
an innocent JavaBean property, e.g., it might call across the network,
call into the DB, re-calculate the value, etc.

More importantly, you can’t predict what that method will do in the
future: getSnapShot() might be a simple JavaBean
property now, but someone might change it, making it more complex and
time intensive. Obviously, when making that kind of change, you’d want
to go through and check all the code that uses your new, slower
version of the method…but there’s always a wide gap between “want”
and actually doing: though good tools make finding calling code brain-dead
easy, programmer laziness often saps even the ability to right-click.
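
A contrived sketch of the point, with every name made up: the getter below quietly grew a cost, and only the caller that queried once is insulated from it:

class Account {
    private int queries = 0;

    // Used to be a plain field read; the counter stands in for a DB hit or network call.
    String getSnapShot() {
        queries++;
        return "snapshot (query #" + queries + ")";
    }
}

public class QueryOnce {
    public static void main(String[] args) {
        Account account = new Account();

        // Query once, reuse the local: a slower getSnapShot() hits this code exactly once.
        String snapshot = account.getSnapShot();
        System.out.println(snapshot);
        System.out.println(snapshot);

        // Calling the getter each time pays the cost twice, and may even see different values.
        System.out.println(account.getSnapShot());
        System.out.println(account.getSnapShot());
    }
}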

Debugging

As a last item, assigning the return of
getSnapShot() to a local variable makes debugging
slightly easier: you can just inspect the local variable rather than
having to get your debugger to execute getSnapShot() and show
the result. (The same goes for the parenthetical referenceless new
class instance example above.)

Traits of Testers vs. Developers (PDF)

Good testing is governed by the scientific model. The “theory”
being tested is that the software works. Testers design
experiments, as Kaner says in Testing Computer Software,
to see if they can falsify the “theory.” Good testers
know how to design experiments, and they often benefit
from previous study of the sciences. Good testers think empirically,
in terms of observed behavior.

Developing software, on the other hand, is much like
creating theories. Laws are specified and rules are applied.
Many developers have benefited from studying
mathematics. Good developers think theoretically.

Developers who are focusing on their theory of how
the software works might dismiss a bug report that describes
behavior not allowed by their software theory.
“That can’t happen; it’s not possible.” Good testers focus
on what actually happens, and like other experimenters,
keep detailed logs. “Well, it did happen; here are the circumstances.”
People who are used to thinking theoretically
have a hard time accepting aberrant behavior without
some kind of explanation. Good testers are skeptics,
whereas good developers are believers.

There are several other good comparisons between testers and coders…and all in just 4 pages!

The Daily Life of a Programmer: Recent Internal Request/Plea

#1 Rule: PLEASE do not bake or microwave fish here! I’ve had several complaints about the fish
smell today. In fact, it is more than 1.5 hours later and there are still comments & questions about
the terrible smell in here! Crescent came over and sprayed. I still had to spray a 2nd time. I know,
my request for no fish is not a rule. It is a personal request. I think I can say several people
would agree. Just remember several people use the breakroom throughout the day.

Singing the Praises of the 12" PowerBook, Yet Again

People keep asking me about my laptop, so I thought I’d post one of the latest detailed replies. The end conclusion: strong buy ;>

Date: Tue, 24 Jun 2003 08:26:53 (PDT)
From: Coté
Subject: Re: PowerBook
To: Zane Rockenbaugh

> I’m going to go on vacation soon, but don’t really
> feel comfortable not working for 5 days, so am
> probably going to get another powerbook. You got the
> 12.1″ version, right? Let me pose you a few questions…

I love my laptop: it’s the best computer ever! The only potential
problems are the hardware trade-offs you make for the small size: that
is, to make a really good 12″ laptop, you have to give up a few
things. I think it’s easy to get over those things, but you
should make sure you can too.

I like the 12.1″ a lot, but I like my computer stuff small so as to
never make me think “hmm…this is too big to take along with me.”
That is, I want all my gizmos to actually be portable, not just
mobile.

My previous laptop, a Dell Latitude something, wasn’t portable by
my standards. Also, battery life is important: the Dell would
only last an hour at best; the Mac lasts for 2-1/2 or so hours.

On my JavaOne trip, I never had any problems using the laptop: it
was always the perfect size for travel, for sitting in the conference
room (or in the hallways) typing, or for carrying around. On that
note, its size and weight allow you to get a smaller laptop bag: you
don’t need one of those boxy, foamy ones anymore.

I went over to REI and bought the “Pee Wee” Timbuk2 bag. It’s a
perfect fit and I don’t look like a dork when I carry it around.

The problem with the 12″ is upgrade limitations: I can only upgrade
to 640 megs of RAM, which I have. The hard drive only goes up to 80
gigs, though I stayed with the 40. I really wish I could get a gig of
RAM and a faster processor in it. HOWEVER, given everything else, I
think the trade-offs are acceptable.

But, if you’re more of a power user than I am — that is, if you
obsess over CPU speeds and caches and all that — you might feel better
buying the 15″: you can upgrade the RAM more, get a faster processor
(?), and it has a PC Card slot.

And, be sure to get the Airport card: it doesn’t come with the
12″. I’d also recommend maxing out the hard drive; I never did much
multi-media stuff (pictures and MP3s) with my PC because it was just
tedious and difficult; but with the Mac it’s so easy, and I can see
that I’m going to fill up my 40 gigs fast.

As far as other accessories, I bought the iGo Juice AC adaptor. It’s
pricey at $100, but it has a US A/C plug,
auto-adaptor, and airplane adaptor. On the North West planes I took,
there weren’t any airplane power outlets, but having a car adaptor is
nice. The whole thing is quite compact and well designed. Make sure
you get the one that’s Apple compatible.

So, that’s about it: I say get a 12″, max out the RAM and HD, get
the AirPort card, and then get the iGo thing if you want an extra
power adaptor with car feature. I’d get the laptop, HD upgrade, and
airport card from Apple.com — I think the prices are pretty much the
same as you’d find elsewhere, and you probably don’t have a choice
with the HD. I bought my RAM at http://www.crucial.com/, and it was
about 30-40% cheaper than Apple (I think); so I’d buy that after
market.

Get one: they’re great!

C# Interview:

But if you look at our design goals, I think that you’ll find some core differences between C# and Java. Java values protection higher than C#, and C# values capability a bit higher. C# is willing to tolerate a higher chance of abuse for increased expressiveness, which is why we have things like unsafe code, operator overloading, and unsigned types.

Typed vs. Loosely Typed, the Debate Continues:

For me, source code of dynamically typed languages is easy to write but hard to read. I keep hunting for the places where objects are created to find out about their types.

. . .

[A]t some point — high school, college, I don’t remember — I pretty much stopped using pencils. I prefer pens, I think for their stark, strong lines. When I screw something up, instead of erasing, I always scribble it out. Maybe excessive pen use is a sign of a more static-typing-oriented personality.

What is Guerrilla Development?

Being at SEI Level I Maturity means that it is likely that your software department, and likely your whole company, is a complete mess. Only heroics and daredevil acts bring victories. There is no notion of control; there is only risk mitigation. There is no process; there are only people. Often, your goal is simply to survive to fight another day.

CORBA, RMI, and the Next Big Thing:

CORBA, the Common Object Request Broker Architecture, is a specification for distributed objects that has been around since the late 1980’s. The goal of CORBA was to provide a language independent standard for middleware (both the basic protocols and for standard pieces of infrastructure). After an initial wave of hype, CORBA never really lived up to the expectations that were set for it, and it has since been mostly displaced by other technologies. In this talk, we’ll present an overview of CORBA and discuss whether or not it succeeded, and what lessons can be drawn for future distribution mechanisms.

This is a fascinating, lengthy presentation on CORBA’s history and use. There’s a comparison with RMI at the end which is interesting as well:

If I tell you an application uses RMI, you know a lot more than if I tell you an
application uses CORBA.

I love these kinds of technology postmortems.

Grosso’s other presentations look good too.

Acrobat Retrospective:

Adobe built a reputation on its Photoshop and Illustrator software for digital artists, but last quarter more than a third of its $320.1 million revenue came from Acrobat-related products for creating and managing digital forms — eclipsing all other product groups, including the one that makes Photoshop.

The article has a good, but concise, history of Acrobat; I didn’t realize Acrobat was 10 years old. With the good screen on the Mac, there’s almost nothing better than getting my hands on a good PDF and reading through it in Acrobat Reader 6.0. It’s got an excellent interface and the document is crisp.

Integration Turf:

“Everything [in the application] can be a service. The job now is not building monolithic applications but wiring services together,” Freivald said. “We want developers to think of adapters as services.”

CBLT

My previous employer, The Cobalt Group, bought one of their competitors a few months ago. When I was there, everyone was quite impressed with Cowboy’s lead-management product, Prospector. If you take a look at their flashy site, you can see why: they know how to do sweet-tasting eye-candy.

Cobalt and Cowboy are both said to be J2EE based (when I left, Cobalt was in the process of converting one of their larger products, Lead Manager, from Perl/Apache to J2EE): it’d be interesting to study how well the two systems are integrating with each other, software-wise and people-wise.

There’s also a little note about Nitra, Cobalt’s internal J2EE based platform:

Version 2.0 of Nitra, the fifth major release of the platform in the past 12 months, was announced in January at NADA 2003 in San Francisco and is now in use by eight (8) OEMs and over 2,500 dealers.

5 releases in 12 months! It must have been a hellish year for the Nitra team.

Bogartin' Broadband

I’m here in Pearland — Houston — and I just happened to see that the AirPort is picking up WiFi from somewhere; the brother-in-law’s neighbors have their network open. This is the first time I’ve slunk onto someone’s WiFi, and it’s pretty damn cool.

The connection is spotty, but it makes me get all excited for the day when the network is everywhere, just hanging in the air waiting to be plucked.

Just One Drink

bushwald: There’s no such thing as “just one drink.”
bushwald: There is such a thing as “just one drink, and then get peer pressured into 5 drinks,” however.
kimskotak: That very well may be the truest thing you’ve ever said to me

Macintosh Justification

An interesting thread: “The challenge that was given to me by several in the Windows camp was to produce evidence of a $20 million dollar business running on something other than Windows that was cheaper and more productive. Prove the cost savings and the efficiencies.”

Link via Cafe au Lait.

The Open-Closed Principle:

Since closure cannot be complete, it must be strategic. That is, the designer must
choose the kinds of changes against which to close his design. This takes a certain amount
of prescience derived from experience. The experienced designer knows the users and the
industry well enough to judge the probability of different kinds of changes. He then makes
sure that the open-closed principle is invoked for the most probable changes.
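
Here’s a hand-wavy Java sketch of that strategic closure; the report and writer names are mine, not Martin’s. The Report is closed against the change the designer bet on, new output destinations, because it leans on an abstraction rather than a concrete class:

// The abstraction the designer chose to close against: where report text goes.
interface ReportWriter {
    void write(String body);
}

class ConsoleWriter implements ReportWriter {
    public void write(String body) {
        System.out.println(body);
    }
}

// Closed for modification: adding an HTML or file-backed writer later
// extends the system without touching Report itself.
class Report {
    private final ReportWriter writer;

    Report(ReportWriter writer) {
        this.writer = writer;
    }

    void publish() {
        writer.write("quarterly numbers go here...");
    }
}

public class OpenClosedDemo {
    public static void main(String[] args) {
        new Report(new ConsoleWriter()).publish();
    }
}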

Eclipse Irritations: .classpath and Phantom Ctrl-Shift-T

At work, the de facto IDE is Eclipse. It
seemed like a good idea to check in the .classpath file
— where Eclipse stores all the JAR dependency information for your
project — and there haven’t been that many problems.

Checking in .classpath Stinks

However, when .classpath problems do happen, things
blow up big time. My advice: don’t check that bugger in;
the path system in it isn’t general enough to be group-usable, and CVS
conflicts are just annoying to deal with when they happen.

Losing Ctrl-Shift-T

Needless to say, I had some .classpath problems today,
and in the process of “fixing” it, I must have re-jiggered some
internal source indexing in Eclipse. As a consequence of whatever I did, I
lost the ability to use the “Open Type” functionality over my own code
base: you know, that nifty little “open up the source for this class”
dialog that you get when you do Ctrl-Shift-T. Sure, all the classes
from my JARs were in there, but none of my project’s high quality code!

To fix it, I closed the project, and then re-opened the
project. Then, when I did Ctrl-Shift-T once again, Eclipse seemed to
be building its index back up, and everything was hunky-dory.

Butter

In closing, yes, I know: if I used only emacs or vi or Notepad or
JEdit or NetBeans or whatever your super-dope IDE is I wouldn’t have
any problems. I’d also have fresh butter churned up for me every 30
minutes. I just don’t have the need for that much butter.

JBoss-Sun Drama

The JBoss Group, the Atlanta, Ga.-based company that controls the development of the open source JBoss J2EE server, has been embroiled in a year-long dispute over the certification of JBoss. Sun would like JBoss to be certified as J2EE compliant, but the JBoss Group says that Sun’s certification process is expensive and ultimately unimportant to their customers.

“Do our customers say, ‘We need you to be certified’? No,” said JBoss Group Director of Sales and Business Development Ben Sabrin. His company is still in discussions with Sun about J2EE certification, he added.

I was mentioning the whole JBoss-Sun stink-up yesterday at lunch; this is a concise article on all the drama.

Liskov, Note for Later:

What is wanted here is something like the following substitution property: If
for each object o1 of type S there is an object o2 of type T such that for all
programs P defined in terms of T, the behavior of P is unchanged when o1 is
substituted for o2 then S is a subtype of T.

Also, from an article by Robert C. Martin:

It is only when derived types are
completely substitutable for their base types that functions which use those base types can
be reused with impunity, and the derived types can be changed with impunity.
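
Martin’s article illustrates the point with the well-worn Square/Rectangle pair; here’s my quick restatement of it in Java:

class Rectangle {
    protected int width;
    protected int height;

    void setWidth(int w)  { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

// Mathematically a square is-a rectangle, but these overrides break
// the assumptions of code written against Rectangle.
class Square extends Rectangle {
    void setWidth(int w)  { width = w; height = w; }
    void setHeight(int h) { width = h; height = h; }
}

public class LspDemo {
    // A program "defined in terms of" Rectangle: it assumes the sides vary independently.
    static void stretch(Rectangle r) {
        r.setWidth(5);
        r.setHeight(4);
        System.out.println(r.area());
    }

    public static void main(String[] args) {
        stretch(new Rectangle()); // prints 20
        stretch(new Square());    // prints 16: substituting a Square changed the program's behavior
    }
}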

Re: Intranet Weblogs

Consider: Every business needs to know what its employees know. Companies are crammed with experts on various topics whose knowledge goes to waste — because nobody knows what they know. Now give these workers an internal corporate blog, and encourage them to use it. Let them natter away on every topic that intrigues them. Harvest and index the results. You’ve mapped your workers’ brains.

. . .

“We’re not saying, ‘We’re going to give this to you, now go off and talk about whatever you want to talk about,’” Regan said. He tells his bloggers to focus on computer and networking topics, so they can share information about the problems and solutions they’ve found throughout the state’s computer systems. “So far we’re pretty happy,” Regan said.

As this article will be erased from the ’net after a few days, I archived a copy on Wunderkammer.org.