Tuesday, December 11, 2007

Does hoodia work?

Hoodia gordonii (pronounced HOO-dee-ah gor-DOH-nee-eye) is also called hoodia, xhooba, !khoba, ghaap, hoodia cactus, and South African desert cactus.
Hoodia is a plant that has caused a stir for its purported ability to suppress appetite and promote weight loss. 60 Minutes, ABC, and the BBC have all done stories about hoodia. Hoodia is sold in capsule, liquid, and tea form in health food stores and on the Internet. Hoodia gordonii grows in the semiarid deserts of South Africa, Botswana, Namibia and Angola. It grows in clumps of upright green stems and is actually a succulent, not a cactus. It takes about five years before the pale purple flowers appear and the plant can be harvested. While there are about 20 types of hoodia, only the Hoodia gordonii variety is believed to contain the natural appetite suppressant.
Although hoodia was "discovered" relatively recently, the San Bushmen of the Kalahari Desert have been eating it for a very long time. The Bushmen, who live off the land, cut off part of the stem and eat it to ward off hunger and thirst during nomadic hunting trips. Hoodia is also used to treat severe abdominal cramps, hemorrhoids, tuberculosis, indigestion, hypertension and diabetes.
In 1937, a Dutch anthropologist studying the San Bushmen noted that they used hoodia to suppress appetite. But it was not until 1963 that scientists at the Council for Scientific and Industrial Research (CSIR), South Africa's national laboratory, began studying hoodia. Initial results were promising -- laboratory animals lost weight after taking hoodia.
The South African scientists, in collaboration with a British company called Phytopharm, isolated the active ingredient in hoodia, a steroidal glycoside designated p57. After obtaining a patent in 1995, the CSIR licensed p57 to Phytopharm. Phytopharm has since spent more than $20 million on hoodia research.
Eventually pharmaceutical giant Pfizer (maker of Viagra) caught wind of hoodia and became interested in developing a hoodia-based drug. In 1998, Phytopharm sub-licensed the rights to develop p57 to Pfizer for $21 million. Pfizer has since returned the hoodia rights to Phytopharm, which is now working with Unilever.
What you need to know about hoodia
Hoodia appears to suppress appetite
Much of the buzz about hoodia started after 60 Minutes correspondent Lesley Stahl and her crew traveled to Africa to try it. They hired a local Bushman to go with them into the desert and find some hoodia. Stahl ate it and described it as "cucumbery in texture, but not bad." She lost the desire to eat or drink for the rest of the day. She also did not experience any immediate side effects, such as indigestion or heart palpitations. Stahl concluded, "I must say it did work."
In animal studies, hoodia is believed to reduce calorie intake by 30 to 50 percent. One human study showed a reduction in intake of about 1,000 calories per day. However, I have not been able to find any of these studies to read for myself, so I am going on secondhand reports.

6:11 AM 21 comments
Friday, March 18, 2005

What is quality to you? How do you measure quality? When is quality accomplished? If you don't know how your customers would answer these questions, your product probably doesn't meet their needs as well as it could. You can fix this problem.

But first you have to build a paper airplane.

Celeste Yeakley and Jeff Fiebrich talked about the Q-as-in-quality-Files. Here's how the paper airplane fits in: Celeste and Jeff had each of us make a paper airplane. When we were all ready Jeff told us that he was looking to buy one of our creations. He would be willing to pay thirty dollars for one, in fact. Here's the catch: the airplane had to have colorful logos on it (half the planes are now out of the running), it couldn't have a pointy nose (there go most of the rest), and it had to fit in a three-inch-on-a-side box (there go the last two). Jeff's thirty dollars are still in his pocket.

Jeff had a very particular definition of quality when it came to paper airplanes. When we were working on our creations one person asked whether the airplane had to fly, but most of us just went ahead and made an airplane without inquiring as to requirements. We did, however, build airplanes that *we* thought were quality airplanes that met whatever we perceived the requirements to be. The lesson: everybody has a different definition of quality, but your definition of quality is likely different than your customers' definition of quality.

Celeste and Jeff showed us how to bring the two definitions together by building a "House of Quality": Work with your customer to build a list of needs and requirements, where each requirement is ranked according to its importance to them. Then have them rank your product and your competitor products as to how well each meets those needs. All this data is poured into a funky diagram that bears some resemblance to a house (thus the name House of Quality I suppose). Slice and dice the data a bit and then BAM! you have a graphical depiction of how your customers' needs meet up with your design features and how those features compare to your competitors in terms of your customers' needs. Doing this over and over with each of your customers can help you understand what your customers really want and how that interacts with what you're planning to build.

Complex diagrams that make customers think you know what you're doing are always good, right? <g/> Seriously, the techniques Jeff and Celeste introduced this afternoon seem like they could simplify the process of crawling into customers' heads and understanding what they're really looking for. This information of course has to be updated on a regular basis just like any of the other information we gather, but that doesn't reduce its value any more than the need to maintain unit tests as your code changes makes them less useful. This process is simple and fast enough, in fact, that it would work really well in an agile environment. "Next year at Software Development: Extreme House of Quality!" <g/>

3:34 PM 472 comments

It should be a lot of fun. You get to hear Scott Ambler, David Dossot, Joel Spolsky, Alan Zeichick, and Paul Tyma talk about how they evaluate dev tools, while I provide the comic relief and some tips from my own experience doing the Jolts, New & Noteworthy, and product reviews for SD.

See you in the theater!

[Afterwards]

Hey, it *was* a lot of fun. Those guys are something else. I didn't total up the years of experience on the stage because it would have been too depressing to think about (hint: When Jerry Weinberg showed that slide of punched-card decks in his keynote, I was convinced that one of them was mine), but these guys have definitely walked the walk.

Paul and Joel were in especially interesting positions, because they're not only developers, but they're vendors of tools that developers buy. Paul made a great analogy to reading a novel: Not only does a dev tool have to do its job well (i.e. characters, plotting, climax), but you should be grabbed by the very first page. You should be able to get something useful done right away.

Joel said that to him, the interesting thing was not tools so much as platforms for development, and he made the case for looking out for the little hide-in-the-cracks problems that will kill you down the line. He gave the example of Java AWT 1.0 never quite giving the full platform experience on, say, Windows (accelerators, program icons, etc.). His intent was to motivate evaluators to really look deeply into a platform--or a tool--before you buy it. Look for the weird special cases, the outre things that your applications have to do that probably aren't covered in the "Hello World" samples from the vendor.

David had some good insights, especially about openness and lock-in. Look at how a tool persists its data, or stores config information. Can you make it interoperate with the rest of what you have?

Alan discussed the role of information sources external to the eval: Magazines, forums, other developers, references. (He also said a lot more for which I wish I'd taken better notes, but I was busy trying to look moderate-ly up there).

Scott talked about some of the dysfunctional organizational obstacles to evaluation. For example: Is the deal already done, and are you just spinning your wheels? There was much more, but you'll probably have to hit Alexa's blog entries to see it!

As for my little spiel...well, I wrote it out word-for-word beforehand, so if you have any interest you can see it here.

2:55 PM 134 comments

When Jeffrey Richter - who has been consulting to Microsoft lo these many years - talks about "Controversial .Net Topics", anyone who is interested in .Net sits up and listens.

The first topic Jeffrey talked about is protecting intellectual property. Tools like Lutz Roeder's Reflector make it super easy to decompile a managed assembly to not just Intermediate Language but to C# or VB.Net code as well. Needless to say, this can be somewhat of a problem if you don't want people figuring out how your app does what it does.

Now, Jeffrey pointed out that most of your app probably isn't worth protecting. As he says, no one really cares how your Copy command works, and even if they do those details are probably not giving you a competitive advantage. For those few parts that *do* give you a competitive advantage, you have a few options:

  • Split those portions out into an unmanaged DLL and use interop to call into it.
  • Use one of the many obfuscator tools to spindle, fold, and mutilate your code.
  • Wait for digital rights management, which Jeffrey characterized as "the real solution", to become real. He indicated that Microsoft will be moving DRM into the .Net runtime at some point in the not-soon-but-not-distant-either future.

The next topic tackled was the efficacy of NGENing. NGENing an assembly, as you might know, compiles that assembly to native code right then rather than waiting until just before the code is executed as normally happens. This is a good thing, since it means the performance hit of compiling said code happens at a well-defined single point in time (i.e., when you run it through NGEN) rather than whenever .Net decides to do so. However, .Net doesn't actually guarantee that it will use that NGENd code because there are various cases that force .Net to recompile the code! So you have to keep the non-NGENd code around anyway. NGENd code has other problems too (did you know NGENing can actually *hurt* your runtime performance by as much as ten percent?). So it's often not really worth the trouble. (Not yet, anyway. Jeffrey says Microsoft is well aware of all these problems and is working hard to solve them. This month's MSDN Magazine covers some of the changes coming in Whidbey.)

Jeffrey finished up by taking on the "common knowledge" that managed code is slower than unmanaged code. This is not necessarily true. Because .Net code is compiled just before it is executed (well, as long as you didn't NGEN it <g/>), the .Net compiler knows all sorts of information about your environment that it uses to optimize the compiled code. All of this adds up to managed code actually being faster than unmanaged code in some few cases today and (Jeffrey assured us) in many if not all cases in the future. As with NGEN, some of these things are coming in Whidbey but others will take longer to arrive.

None of us in the crowd seemed to think any of these topics were particularly controversial (no fistfights broke out, anyway <g/>), but I think it's safe to say we all found them immensely interesting.

11:53 AM 90 comments

If you missed SD's fireside chat with Joel Spolsky (from Joel on Software) you missed some great fun. Joel's articulate, opinionated, and clever writing style translates directly to his on-stage presence. Alexa Weber Morales kept Joel honest, quoting his own book to him and prodding participation from the audience. Joel's 12 rules of software development had a mixed following among the audience, but rest on some good underlying ideas.

Joel's quick wit and Microsoft war stories made for a fun evening.

9:15 AM 4 comments

Last night we had the REST vs. SOAP BOF. Originally when we came up with that BOF it was based on discussions that were flying around on the Internet. Yet in the BOF the REST vs. SOAP debate highlighted what the real debate is about. It is not a debate about Web Services, as both camps would agree those are a good thing. It is not a debate about the technology of REST vs. SOAP. It is a debate about how the future Internet will appear. Will it be SOAP and its WS-* technologies, which are transport-agnostic? Or will it be what we have today that seems to be working, e.g. TCP, SMTP, HTTP, etc.? New World vs. Old World? This is interesting, as the real debate is not about Web Services but the future of communications! Hmm, makes you think, and I will be posting something about this in the future, but I need to grok this one.

4:41 AM 94 comments
Thursday, March 17, 2005

Saw Uncle Bob (Martin) drive this Wiki-based acceptance-test thingie around this afternoon, was very impressed. Your QA and business analysis people create "tests" -- really test specifications -- by editing tables in a wiki. Then the programmers throw on some really dead-nuts simple driver classes that call into the appropriate parts of the software under test.

Push the big button: Yellow if something's missing or broken (e.g., test not yet written), red if the test runs but fails, green if it goes. Compose tests on pages; compose pages into sets of pages; compose sets into whatever hierarchical nightmare your greedy little brain desires.
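For the curious, a Fit-style test table in the wiki looks something like this (a sketch from memory of the stock "Division" example; the eg.Division fixture name and column names are illustrative, not anything shown in the session):

```
|eg.Division|
|numerator|denominator|quotient()|
|10       |2          |5         |
|12.6     |3          |4.2       |
```

Each row is one test case; the programmers' driver class maps the column names onto fields and methods of the software under test, and the framework colors each cell green or red.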

I want this! fitnesse.org

10:55 PM 6 comments

Confused about what service-oriented architecture (SOA) really means? You needn’t be, said David Chappell. All you need to do is look at what the major vendors are implementing, because in the services world, the big vendors define the market—they “rule the world.” And whether or not you understand it, SOA is coming.

At his SD West 2005 keynote speech on Wednesday, Chappell, principal of Chappell & Associates of San Francisco, told his audience that SOA is going to radically change the development world—for the better. (“And if you don’t like change,” he advised the audience, “get out of the software business!”) The major vendors have settled, he said, on a set of standards for interoperable Web services—Simple Object Access Protocol (SOAP) for the underlying communications, the so-called “WS-*” standards for the peripheral protocols—and that “the global agreement on SOAP has the potential to affect software development analogous to the adoption of TCP/IP ten years ago. The result then was enormous change, the global Internet.” In the near future, he predicts, “we’ll see that same sort of ubiquitous connectivity at the application level.” It should be no surprise that this kind of move is coming, he insisted. “We know that pretty much all software we build eventually has to talk to other software. Even lonely applications eventually make friends. Doesn’t it make sense that our default architecture should take into account from the get-go this need for connectivity in logic?” Previous attempts (CORBA, Java RMI, COM, DCOM) have failed, said Chappell, since vendors weren’t all on the same page. But Web services is different, simply because agreement exists. “After years of fighting about it, all the major vendors have finally agreed on how to expose those services.” He predicted that within three years, almost everyone in the audience would be building their applications on services.

Of course, even if all the technical issues were settled, “The real impact of this move to a service-oriented world is with people,” Chappell pointed out. How do you motivate people to make the move? For example, if one group in a company builds applications that can be adopted by other units…what’s in it for the authors? (At a talk in Zurich, Chappell recalled, he was explaining this social process: “A guy jumped up in the front row: This is communism!”) Chappell reported that in his consulting experience, a massive top-down-directed move to a company-wide SOA would be the best choice from a technical standpoint: all the services could be developed together toward a common goal, with less time spent refactoring. “It’s the way any sane architect would approach the problem, yes?” Most information technology departments start out using a return-on-investment argument, Chappell said. “IT goes to the business, with all the credibility IT has (audience laughter), and says, ‘Mr. Business: Give us $5 million dollars—we’ll SOA the world and come back in three years with great ROI!’ Doesn’t work. SOA arguments based on ROI are DOA.” Instead, he said, in all but a tiny handful of cases, SOA adoption occurs incrementally: An application here, a database exposed as a service there. Of course, Chappell explained, such organizations spend considerable time refactoring their services, because getting services right the very first time is quite hard. Despite the resources wasted in rearchitecting downstream, Chappell observed, incremental adoption is the only realistic hope in most companies.

Chappell went on to discuss the trend toward Business Process Management (BPM), and explained how BPM, Business Rules Engines (BREs) and Web services are entwined. “Application servers by themselves aren’t sufficient to support orchestration”, he said, noting that business process logic needs extra services not provided by traditional application servers. Again, the crucial trend is the convergence of vendor support on a common model: Web services to communicate among the business process logic, the legacy applications, and the new Web-service-enabled applications. It may even enable a new class of developer/analyst hybrid, said Chappell. The major vendors all support graphical development of the business logic that runs within their rules engines (“there’s a standard for this called Business Process Modeling Notation that nobody supports yet”, he said wryly). He stepped quickly through a graphical example of creating business logic in the diagramming notation. “But do you see objects anywhere in here?” he challenged. “Do you see classes? Inheritance?” Clearly the engine will compile it to objects under the hood, he said, but it needn’t be visible to the user. Today’s model of embedding business logic in Java or C++ or C# code is flawed, said Chappell, because the logic becomes scattered throughout source files and is hard to find—even for specialist programmers. But the advent of business rules engines, communicating via Web services and driven by graphical editors usable by nonprogrammers, is “about to go mainstream. Last year Microsoft shipped a rules engine”, he pointed out. “They don’t enter small markets.” This means, he said, that the kind of business analyst who used to work in Visual Basic (“by definition, they’re knowledgeable in the business and not afraid of technology”) will now be able to work directly on the business logic that’s their specialty, rather than having to just pass requirements on to developers.

“This is the strongest force in development, the biggest thing we’re going to face, for the next several years,” he wrapped up. “If you’re open to change…welcome to the service-oriented world.”

10:43 PM 132 comments

“You’re not totally naked in the world,” cryptologist Jon Graff reassured his audience at SD West 2005, “you’re wearing a towel.” By the time that Graff finished his day-long tutorial, “A Lucid and Easily Understood Explanation of Modern Cryptography,” on Monday, March 14, he had shocked attendees by pointing out that even using an ATM can be a minefield of security holes—but he’d also equipped them with mine detectors in the form of a solid grounding in today’s crypto.

Graff, head of architecture in Nokia’s Enterprise Solutions group, knows that cryptology is vital, yet insufficient on its own. “Cryptology definitely does not solve all the world’s problems—except when you’re selling it to a customer,” he quipped. He explained that attention must be paid to technical cryptographic strength, but that’s not enough. Cryptosystems like the Data Encryption Standard (DES)—long sanctioned by the U.S. government—do become obsolete, said Graff, for a variety of reasons, including the accelerating pace of Moore’s Law and the discovery of previously unknown flaws in cryptosystems commonly considered secure. But as a seemingly endless series of examples illustrated, it’s often not the cryptology at fault, but the larger system in which it’s embedded. A case in point: the worldwide automated-teller-machine network. “This thing is secure,” said Graff, one eyebrow raised; “—like a sieve.” Attacks range from the obvious (cameras placed to record the victim punching in his PIN) to the sophisticated (fraudulent ATMs that read and record PIN and card number), to the forehead-slapping (most ATMs worldwide, said Graff, depend on the continued secrecy of a single cryptographic key), to scenarios lifted straight from CSI (dusting a stolen card with iron filings to decipher the bar code). Graff’s advice on fake ATMs: “If you’re not sure, enter an incorrect PIN the first time. If the machine accepts it, run away!”

Because of the economics of security and the inertia of already deployed systems, even your car and your cell phone aren’t safe: Bluetooth networking uses only a 40-bit key, which can be cracked by brute force in minutes by a laptop computer. “This is a good example,” said Graff, “of why you can’t layer security on after the fact—it has to be designed in from the first. Security is a way of making things not happen, so if you put it in afterwards, things break.”

Graff went beyond the horror stories into crypto’s inner workings. Crypto has a reputation for fearsome complexity, but Graff led the class bit by bit through the arcana, delivering it in chunks small enough to be digested by “mathematically perplexed people.” Starting with symmetric-key encryption, he showed how key exchange rapidly escalates to a huge problem as the number of communicating entities increases, then elucidated the Kerberos system as a clever solution to the problem. After that, it was on to the mysteries of public-key cryptosystems and the internals of technologies like Secure Sockets Layer (SSL), which many use but few comprehend—unless they’ve attended Graff’s class.
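The key-exchange scaling problem he walked through is simple arithmetic; here is a quick sketch (my own illustration, not Graff's slides):

```java
// With plain symmetric crypto, every pair of n parties needs its own
// shared secret, so the key count grows quadratically. A Kerberos-style
// key-distribution center (KDC) needs only one long-term key per
// principal, so the count grows linearly.
public class KeyCount {
    // Pairwise shared secrets: n choose 2 = n*(n-1)/2.
    public static long pairwise(long n) {
        return n * (n - 1) / 2;
    }

    // KDC model: each party shares exactly one key with the KDC.
    public static long withKdc(long n) {
        return n;
    }
}
```

For 1,000 users that is 499,500 pairwise keys to generate and distribute, versus 1,000 with a KDC, which is exactly why a trusted third party like Kerberos is such a clever solution.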

10:41 PM 102 comments

I remember when Web Services were first introduced at SD (about four or five years ago). Back then the Internet Bubble was growing, and things were good. Then the bubble burst and so did Web Services. But, gee, wow, finally Web Services are back on the radar. Finally, it seems tech spending is up again! I suppose the BOF "SOAP vs REST" is going to be interesting tonight.

4:53 PM 122 comments

We were already used to the fact that everything we know is obsolete every two years. Now what? All these familiar things we rely on are actually wrong?

Well, at least, this is what Allen I. Holub explained this afternoon, to a stunned audience, in a pretty convincing and very practical manner.

Oh, by the way, we are not completely wrong... In fact, Allen's pitch was about how what we know about implementation inheritance (extends, for Java buffs) and accessors (getters/setters) is actually wrong. Okay, Allen could not resist sharing his opinion of EJBs, Struts and the like, but no one can blame him for such a diversion.

He made a strong case for agility versus fragility (which reminded me of the fragile manifesto), and explained that by making the code more flexible, we become truly agile. He made that case by emphasizing how avoiding extends and accessors can help us do that. Here are some key ideas:

  • objects are what they do, not the data they contain,
  • program to interfaces and keep things abstract,
  • extends fragilizes the base class by making subsequent changes risky: prefer encapsulation,
  • accessors are just complicated public fields; they are no better,
  • don't ask an object for data, ask it for help (i.e. make the object do something for you instead of pulling out its data and doing the work yourself).
Allen stated that, of course, there are cases when extends is useful (e.g. to avoid code duplication) or when accessors are needed (e.g. at the boundary between the OO world and the outside world).
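To make the "ask it for help" idea concrete, here is a minimal Java sketch (my own illustration; the Money class is hypothetical, not an example from the talk):

```java
// Tell, don't ask: Money does its own work with its own data, so
// callers never need getCents()/getCurrency() accessors, and the
// internal representation can change without breaking anyone.
public class Money {
    private final long cents;        // private: the data stays hidden
    private final String currency;

    public Money(long cents, String currency) {
        this.cents = cents;
        this.currency = currency;
    }

    // Behavior lives with the data it uses.
    public Money plus(Money other) {
        return new Money(cents + other.cents, currency);
    }

    // The object formats itself instead of handing out raw fields.
    public String formatted() {
        return String.format("%s %d.%02d", currency, cents / 100, cents % 100);
    }
}
```

Compare this with a bean full of getters, where every caller would duplicate the formatting and arithmetic, and changing the representation (say, to BigDecimal) would ripple everywhere.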

He added that this approach does not make code smaller, nor does it completely solve all the problems, but it does make things manageable. No more need to try predicting the future: you will be able to adapt!

Judging by the discussions going on here and there after the class, this speech left no one indifferent. And that's the whole point of this entire week: making us think about what we do and daring us to improve it.

4:35 PM 4 comments

Any session that starts with the comment "All examples in this talk represent things you should NEVER do" has to be good! This morning Dan Appleman explained his views regarding "Why did they do that?", where "they" refers to the designers of Microsoft .Net. While you can certainly use .Net effectively without understanding the whys and wherefores behind it, I find that understanding such details allows me to use it better.


So why did they do that? Always for a good reason. And regardless of how many nice things result, there are consequences that cause various degrees of pain. For example, take garbage collection. Garbage collection means that programmers don't need to worry about memory management anymore...except that now you don't know with any certainty when an instance will be destroyed, and so if you need it to go away at a specific time you have to explicitly Dispose it, and are things really any simpler now? Maybe not, but when faced with a choice between dealing with IDisposable or trying to remember COM's rules regarding who is responsible for updating an object's reference count when, I'll take IDisposable every time.
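Java faces the same tension, and its answer looks a lot like IDisposable; here is a hedged sketch of the analogous pattern (my own illustration, not something from Dan's talk):

```java
// AutoCloseable is Java's analogue of .NET's IDisposable: the garbage
// collector reclaims memory whenever it likes, but close() runs at a
// well-defined point, when the try block exits.
public class TempResource implements AutoCloseable {
    private boolean closed = false;

    public boolean isClosed() { return closed; }

    @Override
    public void close() {
        closed = true;               // deterministic cleanup happens here
    }

    public static boolean demo() {
        TempResource r = new TempResource();
        try (r) {                    // try-with-resources (Java 9+ form)
            // ...use the resource...
        }
        return r.isClosed();         // true: close() has already run
    }
}
```

Either way, the language trades "remember to free memory" for "remember to close resources deterministically," which is a much smaller and more local thing to remember.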
Speaking of COM, Dan says "[t]he fundamental problem with COM is that normal people can't understand it".


Although I work at Microsoft I'm not a member of the .Net design team; I don't even play one on TV. Nor do I know how many of Dan's "Why? Because..."s are based on conversations with people on the .Net design team as opposed to his personal conjecture. That said, Dan's explanations seemed plausible and helped me understand why .Net does some of the things it does. Dan's session was a perfect example of the reason I keep coming back to SD: the prospect of a week packed to overflowing with information that helps me do my job better.


And then there's the other reason I keep coming back: an endless supply of throwaway lines like Dan's comment that "Some questions can't be answered yet...Why J#?" <g/>

11:52 AM 85 comments

I've heard the term crossing boundaries in a couple of different Web Services sessions so far and it seems that this term "cross boundaries" needs to have some context applied. Web Services help us to cross many boundaries, such as geography, application, business and even legal. Let's look at each of these in more detail:

  • Geographic boundaries: The Web Services model allows us to easily communicate across machine boundaries and land boundaries. The more interesting one here is how easily we can now cross land boundaries when those boundaries are with countries with whom spoken communications are extremely complex and may require human translators.
  • Application boundaries: This is probably the most commonly held view of the types of boundaries that need to be crossed with Web Services. Web Services can be used to allow one application to pass data to another application regardless of the receiver's hardware platform or application architecture.
  • Business boundaries: Oftentimes crossing a business boundary can be more difficult than crossing a geographic or application boundary. Businesses are very conservative about allowing data to pass in or out of their domains, even across departments. There are good security reasons for controlling this, but politics plays just as important a role as security in defining these boundaries. Web Services allow each business entity to create well-known interfaces for passing data in and out.
  • Legal boundaries: Government agencies best represent this class of boundaries. Moving data between different governments' customs departments for faster processing of transported goods or, as is the case within the US, passing data between intelligence agencies can be construed as unconstitutional. Web Services are now being accepted as an acceptable means for moving data across these legal boundaries.
There's enough meat here to write a few chapters of a book about SOA, but it seems like a good blog entry as long as I sum up the issues quickly.

11:00 AM 54 comments

To the fellow whose product didn't win the Jolt, and who buttonholed me afterwards to tell me how much better your product's features were than the winner's:

Don't do that.

Next year I'll probably be judging your product again. I promise you that I will try as hard as I can next year to forget what you said. Nor will I tell the other judges who you were.

So show some class, OK? Thank you.

8:41 AM 2 comments

On Thursday morning I'll be describing the ins and outs of O/R mapping techniques. The bottom line is that we're using relational technology on the back end and object technology to implement our business functionality, and we need to make these two disparate technologies work together. This isn't too hard when you know what you're doing. I'll be sharing over 10 years of experience making objects and RDBs work together.

6:10 AM 4 comments

This morning I hope to attend Mary Poppendieck's talk overviewing product development techniques which will greatly enhance your project efforts. Mary will argue that you need to take a look at the full lifecycle of a project, both the development cycle as well as what happens once it's in production, if you want to get it right. You also need to take an incremental approach to both learning and funding as you proceed during development.

I've attended Mary's talks before, and have been lucky enough to have some very interesting conversations with her, and have always learned something which I can apply in practice with my clients.

6:04 AM 23 comments
Wednesday, March 16, 2005

So tonight, we had three awards ceremonies in a row:

  • Programmer's Paradise Riding the Crest Awards,
  • Dr. Dobb's Journal Excellence In Programming Award,
  • and Software Development Awards.
and, yes, it was great fun. But it wasn't only fun; there was more to it than that.

I have been deeply touched by the enthusiasm, the intelligence and the quality of the nominees. Whether they were individuals (sheer gurus) or companies (hard-working teams), they all shined tonight when they received their well-deserved awards.

Gee, this is a truly jolty week.

9:12 PM 74 comments

This afternoon Randy Miller of Microsoft described how requirements are captured in MSF. He's currently writing "Microsoft Solutions Framework for Agile Software Development: The Definitive Guide to Microsoft's New Agile Methodology", to be published by Addison-Wesley in late 2005 (my guess; I'm not sure about that).

MSF was first developed in 1990 and has evolved over the years. Version 3, which you'll find on the web, was released in 2001. MSF is a context-driven approach. See One Size fits None at http://www.sdmagazine.com/documents/s=9575/sdm0503i/sdm0503i.html .

MSF for agile is an adaptable, minimalist approach which you want to stretch to fit instead of shrink to fit. It is a just-in-time approach to development.

One process does not fit all.

If a person has the skills, they may play a role.

The activities within an agile process need to be able to handle small increments. Use cases can be hard when your iterations/cycles are short, and as a result you often find you have to build a slice of a use case. This can be a problem when your manager wants to know when the use case "is going to be done". Use cases are often too big; we need something smaller (e.g. user stories, features, ...).

The main requirements artifacts in MSF are personas and scenarios.

A persona is a description of a group of typical users. A persona represents a proxy for a user group, providing you with a means to talk and reason about the group through the characteristics of one fictional individual, the persona. For example, "Brendan Burke" would be a persona. Persona descriptions include a name, a picture, and a list of the typical skills, abilities, needs, habits, tasks, and backgrounds of a particular set of users. A persona is fictional reality, collecting real data describing the important characteristics of a particular user group in a fictional character. See http://www.steptwo.com.au/papers/kmc_personas/ for more on this. The advantage of personas, if you get them right, is that the developers feel that they understand their users. They also seem to retain the information longer, because it's more personal than just discussing actors. Assuming a persona is also a good way to do exploratory testing.

Scenarios are single paths of user interaction through the system. A scenario describes the specific steps a user will follow to attempt to reach a goal. You want to wait to write up a scenario -- do just-in-time modeling in Agile MSF.

Randy overviewed the Kano satisfaction model (http://www.12manage.com/methods_kano_customer_satisfaction_model.html )

Visit http://lab.msdn.microsoft.com/teamsystem/workshop/msfagile/default.aspx to see the current version of MSF Agile.

4:39 PM 214 comments

Creators of Python, Perl, SQL, STL and Scheme—all Excellence in Programming award-winners—debate the importance of languages.

Five of IT’s greats participated in a rambling discussion at SD West on Tuesday, March 15, 2005, moderated by Jonathan Erickson, editor in chief of Dr. Dobb’s Journal. The talk was understandably skewed toward languages, given the presence of Python creator Guido van Rossum, Java and Scheme co-creator Guy Steele Jr., Perl creator Larry Wall, SQL co-creator Don Chamberlin and the C++ Standard Template Library creator Alexander Stepanov. Each offered spirited insights into the role of literature, math and learning in software development. Here are highlights:

On the greatest language:

Steele: “I’m tempted to say that Java was the best language of the decade, but I think JavaScript has the better claim. The worst languages are HQ9+ and Befunge. Very few people have ever used them, however, so they haven’t caused much harm.”

Chamberlin: “There’s a tendency to feel like something that’s more complicated is better. Languages today have too many features, and the effect in the aggregate is a decline in the elegance of computer languages.”

Stepanov: “I believe the greatest programmer ever is [Donald] Knuth. Knuth uses C. Let me make a great pitch for C. Why is English a great language? Because Shakespeare and Dickens and Trollope wrote in it. The moment I see great code bases written in Haskell or Perl, for example, then I will revise my view.”

Van Rossum: “English is a great language because it’s everybody’s second language. English is the lingua franca. Underneath Python, Perl, Haskell and the Java Virtual Machine is probably a C implementation that makes it all possible.”

On the role of mathematics:

Wall: “I think there’s a cultural balkanization going on. There have always been language wars and editor wars. There seems to be a breakdown—different realms of computer science are not informing each other. You can blame it on postmodernism if you like. I tend to come from a linguistic side of things rather than the mathematical side, but the two need to talk to each other.”

Van Rossum: “Mathematicians have had very little to add to the world of programming, and in that sense I have to disagree with the concept of mathematics being a foundation for programming.”

On the learning process:

Chamberlin: “I wish I had more time for learning. The most effective way for me to learn something is to teach it.”

Wall: “Try to learn something very different than what you already know. For instance, I’m now learning Japanese, which is different from Indo-European languages. I always thought reverse Polish notation was very strange, until I learned Japanese and realized that they speak reverse Polish notation.”

3:58 PM 92 comments

Refactoring I'm familiar with. Ken Pugh I'm familiar with. Ken Pugh talking about prefactoring sounded interesting. And it was!


You may not know what prefactoring is. I didn't. Ken defines prefactoring as "developing code that reduces the need to refactor".

Oh, I get it. Ken's talking about good coding principles!

Sure enough, that's exactly what this talk was about. A few that stood out for me:

  • Know the assumptions you're making. When serializing a spreadsheet out to disk, is it better to write it out by rows or by columns? The "correct" answer completely depends on your current context. Make your implicit assumptions explicit and then make your decisions based on that full set of information.
  • When in doubt, choose the option that will make it easier to change things in the future. For example, it's easier to combine concepts than to split them apart. Gerry Weinberg (kibitzing from amidst the crowd) generalized this to "It's easier to lose information than to find it". Similarly, pretend there are no primitives and make classes for everything. If Account changes from an int to a string, do you really want to inspect every int in your application to decide whether it needs to be changed?
  • Define every class using a single sentence. When contemplating adding a responsibility to a class, vet it against that sentence to determine whether it really fits in the class. Good names are still important, but a succinct definition of a class's purpose is extremely useful even if its name is the most perfect name imaginable.
  • Separate concepts into classes based on behavior, not data. Inheritance hierarchies whose lower levels differ only in the data they return are misusing inheritance. Instead, create a single class that is configured with the differing data.
  • Write single purpose functions. Ken takes this further than the common "a method should do exactly one thing and no more" and advises that you separate the policy of a method (what it does) from its implementation (how it does it). For example, don't embed blocks of code in the conditionals and various branches of an if statement but instead call methods that encapsulate that code. This is just one part of a larger practice Ken also espouses: intentional programming. Code that has been programmed intentionally says what it means and means what it says and is more understandable than code that has just been written.
  • Be sure you're using the entire interface when you inherit. If you're just using part of the interface, containment is likely a better option. Along the same lines, split a single interface into multiple interfaces if multiple clients need different parts of the interface.
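
A quick sketch of the "no primitives" point above (my own toy example, in Python for brevity -- the class name and representation are mine, not Ken's):

```python
class AccountId:
    """Wraps what would otherwise be a bare int or string, so that a
    later change of representation (int to string, say) is confined to
    this one class instead of being scattered across the application."""

    def __init__(self, value):
        # The representation is an internal detail; callers never see it.
        self._value = str(value)

    def __eq__(self, other):
        return isinstance(other, AccountId) and self._value == other._value

    def __hash__(self):
        return hash(self._value)

    def __repr__(self):
        return f"AccountId({self._value!r})"
```

If Account later changes from an int to a string, only `AccountId` changes; the rest of the code base already deals in `AccountId` values, not ints.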

Now if Ken could just show me how to prefactor my way out of maintenance altogether...<g/>

3:39 PM 53 comments

Jennitta Andrea gave a talk entitled Agile Requirements this afternoon. Interesting points she made:
1. You don't want to document, or say, the same thing multiple times. You want to eliminate redundant artifacts. e.g. Single Source Information
2. Diagrams/models definitely have a place, including those of the UML. e.g. Apply the Right Artifact
3. You need to decide how much detail you want to include in an artifact. e.g. Model With A Purpose and know your audience and what their needs actually are. You could just give an artifact a name, an outline, or full detail. Do just enough documentation.
4. You can reduce the formality of your artifacts. The greater the formality, the greater the amount of work you'll need to do. The notation/format used will be determined in part by the audience for an artifact as well. e.g. Models/documents should be just barely good enough.
5. Start your documentation as late as possible.
6. Retire an artifact that is no longer needed. e.g. Discard Temporary Models.
7. There are some documents that you will need to keep up to date. Some artifacts will be permanent.
8. Many people consider acceptance tests to be executable requirements.
9. Deliver incrementally. Supports change much easier, promotes feedback, increases the chance that you build the right thing and spend the money wisely. You want to do the most valuable work first ( see http://www.agilemodeling.com/essays/agileRequirements.htm#ChangeManagement ), do technically risky things first, and do work with the lowest likelihood of change earlier.
10. You want to reduce hand-offs. IMHO, hand-offs between people/groups are a process smell. Having a stakeholder talk to a BA, who writes a doc to give to the developers, isn't as good a way of working as having the stakeholders work directly with developers.
11. When there is great communication between developers and stakeholders, you don't really need a business analyst (my caveat -- BAs can still add value because they might know more effective techniques which they can share with others).

2:11 PM 74 comments

First up today: Elisabeth Hendrickson discussing how in the world testing can survive in an Agile world. After all, Test traditionally uses all that time it takes Dev to give us something useful to write and review test plans, test specs, and test cases. Then, once we finally get our hands on the app, we feverishly run through all this as fast as we can in hopes of getting through all the really important test cases and finding most of the really nasty bugs before management decides the app is good enough to ship.

A tester on an agile team doesn't have time for all this. Dev is passing code off to Test on a daily if not hourly basis. What's more, testing can seem unnecessary when your developers are using test-driven development and routinely writing very comprehensive unit tests that come very close to 100% code coverage on even the hard-to-hit error handling code. What's a tester to do?

If your entire organization is willing to embrace agility and be responsible for quality, then you're in position to agilify your testing. Elisabeth summarizes this as transforming testing from being the last bulwark of defense protecting the user from those darned developers to supporting your team in producing a great product.

Elisabeth's session really spoke to me because while my team isn't officially agile we are trying to make exactly that change. While I was happy to find that we are already doing some of the things she suggested, I also came away with many great ideas. Highlights:

  • Design for maintainability. A lot of the work developers do is aimed towards making their code maintainable; testers should do the same. Don't Repeat Yourself. Use generic checklists (I call these Did I Remember To Lists) rather than cutting and pasting them into each and every test case.
  • Keep your test documentation as simple as possible. Consider whether you really need three pages of detailed test steps describing every last detail or whether summarizing the actions to take would be sufficient.
  • Make testing part of daily life, not a separate phase that only certain people do.
  • Remember that it's a team, not just developers and testers sitting next to each other. Get them talking with each other and collaborating on everything from brainstorming test cases to writing test infrastructure.

If you:

  • Focus on providing value to key stakeholders
  • Shift from being the last line of defense to providing an information service
  • Aggressively reduce time and resources spent on anything that does not directly contribute to providing information
  • Collaborate with programmers to improve testability and leverage test automation efforts

your testing will be more agile.

Hmmmm...these sound like good ideas even if your team isn't agile!

9:43 AM 16 comments

In this keynote David Chappell promises to provide insights into an SOA approach to development and how it's affecting the way that we work. It should prove interesting.

8:22 AM 72 comments

In this talk I'll be overviewing the EUP (www.enterpriseunifiedprocess.com), an extension to the RUP to make it a full IT lifecycle. The RUP is just a system development lifecycle, which is perfectly fine, but it's not sufficient if you truly want to have a consistent process within your IT department. I'll discuss how to extend the RUP to include operations and support, enterprise architecture, strategic reuse, portfolio management, enterprise/data/security administration, people management, and enterprise business modeling activities. There is far more to IT than system development. Hope you find the talk interesting.

8:18 AM 10 comments

On Wednesday morning Elisabeth Hendrickson of Quality Tree (www.qualitytree.com) is giving a talk entitled Agile QA? which describes how QA teams can work effectively with agile development teams. Not only is this possible, it's absolutely critical to your success (IMHO). Should be an interesting talk, and I'll add comments as the talk progresses.

8:15 AM 39 comments

SOA, BPM... SOS!

What's really hidden behind these two acronyms? How can you leverage the second (Business Process Modeling) to build the first (Service Oriented Architecture)? How can this be done now, in a pragmatic way?

I suggest you attend Michael Rosen's class this morning to get replies to these questions. For those who can't, I will post some tips in this topic later on...

7:00 AM 165 comments

Caught Josh Kerievsky's tutorial Tuesday. If you missed it--too bad for you! If you missed it and don't have the book, do like the guy who sat next to me: Jump up, run out of the class, and buy it.

Basically, what Kerievsky does is take patterns out of their religious context, saying that of the people who actually get design patterns, too many are "patterns-happy" and want to build them in regardless of need.

The trick is to wait. YAGNI, right? The simplest thing that could possibly work? When the code starts to smell, you'll need to refactor; at that point, consider your refactoring in the context of patterns.

That's the central takeaway from the class, and the book. But it's a biggie.

Note that I said "in the context of", not necessarily "to implement". Kerievsky points out that sometimes, the refactoring just isn't worth it unless you build it to the pattern (although he railed about slavish adherence to the "structure" examples in GoF and other pattern books--you don't have to build every special case and widget in just 'cause it's in the book!). But other times, as you refactor, you get to a point where the smell goes away before you've implemented the official pattern. Stop there. Kerievsky calls that refactoring "towards" a pattern--that's good too. Finally, he shows where you might refactor away from a pattern (the example he showed was converting a Singleton instance to a considerably-simpler inline implementation).
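
The Singleton-to-inline move looks roughly like this (a minimal Python sketch of the idea, not Josh's actual demo):

```python
# Before: a Singleton ensures there is exactly one Config, and every
# caller reaches it through Config.instance().
class Config:
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

# After: if only a handful of call sites ever used it, the considerably
# simpler move is to construct one Config inline and pass it along --
# no global access point, no hidden dependency.
def start_app():
    config = Config()      # plain construction at the one call site
    return report(config)  # collaborators receive it explicitly

def report(config):
    return config is not None
```

The refactoring away from the pattern removes the global access point; whether that's worth it depends, as always, on how many call sites actually depended on it.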

His slides and demos showed lists of smells, relating them to specific refactorings, and showed how to perform the mechanics. Moving, say, embellishments into a Decorator pattern involves multiple refactoring steps, not just one pull of the Eclipse lever, and he gives a good account of how to get there (as well as an appreciation of the process of developing and improving your own).

Me, I've got the book. Actually, I've got two; I keep loaning them out.

6:51 AM 11 comments
Tuesday, March 15, 2005

Is it just me? Or does Silicon Valley really feel like it's coming back?

Now, my first SD West shindig, as it happened, was on the very cusp of The Bubble. Remember the New Economy? I was hauling butt skyward with it. The magazine put me up at a posh hotel. For registration swag, I got a courier bag so nice I'm still using it. The vendors threw a party where they hired a whole club, and a band to play live. Everybody was all abuzz on deploying massive enterprise integration projects.

Fast-forward to the next year. I stayed at a motel. At the Editorial Board meeting, one of the magazine's stalwarts--a guy I'd admired for years before I ever started writing for SD--frankly begged for a job. The show floor had to be partitioned off so the remaining vendors wouldn't look so lonely as they grabbed desperately at anyone wandering through. I got a coffee mug in my registration packet. The attendees scurried about, looking hunted, and were suddenly fascinated by open-source tools.

This year...I'm staying in that place with the $90 bathrobe. (Thanks, SD.) At the show-floor party tonight, I had to elbow my way through *crowds*. Throngs, even, and the vendors had so many developers in their booths that we self-appointed Lords of the Press could hardly get a word in edgewise.

Kids, I think we're back. Now let's do it right this time.

8:19 PM 4 comments

Scott Meyers. A standing room only crowd. Scott Meyers. Eight hours of Scott Meyers. Fun!

Scott's all-day tutorial was titled "Better Software -- No Matter What". To summarize in a single sentence: quality is important. No surprise, right? The reason this was an all-day talk is the many specifics of what this means to Scott. Key points for me:

  • All defects are worth addressing. If correction of a defect is not feasible, consider whether you can prevent it outright or at least prevent customers from running into it.
  • Following quality practices does not have to cost you time. In fact, it's likely to save you time by eliminating rework and debugging.
  • Useful specifications are of utmost importance. This does not necessarily mean you need reams of pages of documentation. It does mean that you should ensure that when you sit down to implement a feature you have some form of spec that is complete and detailed enough to answer questions during design and implementation, unambiguous enough that everybody is likely to interpret it in the same way, and taken seriously by everyone involved. If your spec doesn't meet these criteria, you have a professional responsibility to speak up and suggest ways to make things right.
  • Interfaces should be easy to use correctly and hard to use incorrectly. Scott calls this the most important general software design guideline. He suggests that you adhere to the principle of least astonishment and work to maximize the likelihood that user expectations about the interface are correct. Good names, consistency, and custom types are your friends in this arena.
  • Static analysis (the process of inspecting source code for errors) is a Good Thing. Compiler warnings, lint and similar utilities (e.g. FxCop for managed code), and code reviews are all excellent tools you have to be crazy to not use.
  • Don't impose limits when you don't have to. You know those websites that don't let you put spaces in your credit card number or assume your last name is shorter than some number of characters? Scott calls these keyholes, because they force a certain view of the world on you (as does a keyhole when you peer through it). Keyholes are Bad.
  • Don't duplicate. Need I say more?
  • Consider test-driven development and develop iteratively. These are especial favorites of mine as I've found them to have a huge positive impact on the quality of my code.

Scott finished up by reminding us that while yes, each of us is special, none of us is so special that these guidelines don't apply. Yes, even you. <g/>

4:49 PM 312 comments

Birds-Of-a-Feather sessions are like sushi - either really good or, ya know, really not-good. Not often much middle ground. I attended the BOF last nite regarding Microsoft vs. Sun solutions. The details weren't spelled out, so probably every aspect of these solutions was discussed (I think we missed MSFT mice vs. Sun mice, however).

As soon as I sat down I saw we had amongst the crowd 2 speakers Christian Gross and Ken Pugh. Sometimes you get a feeling in life that something is just "gonna be good". But if you see those two guys in a BOF - you don't need the feeling. It IS going to be good.

As expected, those two guys hit the ground running, with Christian's indeterminate accent and Ken's booming voice setting the pace.

The BOF probably came to the conclusion it should have - you should always be using the BEST solution - who makes it is not necessarily relevant. Of course "best" is unique to your situation.

I definitely recommend checking out the program and finding the talks by Christian and Ken and penciling them into your schedule. You won't be sorry. And if you see a BOF with these guys yelling and waving their hands in the air - grab a beer and some pretzels - you're in for a show.

Paul

8:55 AM 2 comments

David Hecksel, senior Java architect at Sun Microsystems, moderated last night's panel discussion on agility. His panelists were Net Objectives senior consultant Dan Rawsthorne, Software Development columnist Scott Ambler, Effective Java author Joshua Bloch and Microsoft Visual Studio Team System program manager Granville Miller. The question-and-answer portion went quite well, with unexpected tangents from audience and experts into gender issues, people skills, software development vs. university-taught computer science. But Hecksel also plugged his own patterns for methodology, methodology selection, team composition and management at www.davidhecksel.com.

Apparently, he's seeking a patent for a “System and Method for Software Methodology Evaluation and Selection.” It seems he's wading into the business method patent territory (remember the Amazon one-click purchasing method patent uproar?). Hecksel also has defined a "Hecksel Agility Index," which appears in the December 2004 patent application as the "Agility score":

System and method for evaluating and selecting methodologies for software development projects that may be used in selecting an appropriate development process (methodology) for software projects from among various methodologies. A project context for a project may be defined. Attribute values for one or more attributes of one or more components of the project context may be determined. An Agility score for the project context may be generated from the determined attribute values. In one embodiment, a project context may be scored against each candidate methodology. The Agility score may be applied to an Agility curve for the project context to determine a best-fit methodology for the project from a plurality of methodologies. In one embodiment, the plurality of methodologies may include methodologies ranging from lightweight to heavyweight methodologies. In one embodiment, the plurality of methodologies may include one or more Agile methodologies.

What does Hecksel intend to do with this patent? This is an interesting new wrinkle in the agile trend, and the first patent I'm aware of on the topic. Post your thoughts, and stay tuned for my full article on the panel in the SD Express show daily and on our website at www.sdmagazine.com.

8:40 AM 151 comments

Today I'm attempting an all-day tutorial: Better Software -- No Matter What. Speaking is Scott Meyers, author of the classic Effective C++ book series. Scott is always entertaining and never pulls his punches, so this should be a fun day.


Unfortunately for latecomers, Scott's talks are always well-attended, often to the point of standing room only crowds. The conference planners know this and so today's tutorial is in a Room Of Considerable Size, but still: come early or be prepared to stand for eight hours straight!

8:28 AM 6 comments

This morning I'm sitting in on the first part of Karl Wiegers' tutorial. As you can imagine I'm a firm believer in understanding and hopefully validating (IMHO the best way to do that is via working software) requirements. Karl really knows his stuff, so if you get a chance drop by this tutorial.

I'll post a few observations later today.

8:17 AM 353 comments
Monday, March 14, 2005

How often have you asked yourself: "how can I test this" or "should I really test that"? Then concluded that testing this or that component was impossible or useless, which left you with a bitter feeling of incompleteness?

Rejoice! Joe Rainsberger masterfully addressed these questions today in a full-day tutorial during which he first demystified many prejudices about J2EE testing, then presented how to implement these tests with in-depth programming sessions.

Here are a few tips in case you missed this cool presentation:

  • test only what makes sense to test (assume that J2EE vendors do their job on testing their part),
  • test everything that makes sense to test (do not assume that test blind spots are an option),
  • have your tests running fast enough so they can run often.
Okay, this is pretty general, so here are some more technical hints:
  • always program to interfaces so you have a chance to substitute one implementation with another,
  • follow the Hollywood Principle and avoid Service Locators when designing your application (this will increase the capacity to substitute an implementation with another one, while reducing overall dependencies between components and technologies),
  • when testing, use mock objects to replace complex infrastructural components with implementations that act the same way,
  • Models, Views and Controllers can all be tested outside of a J2EE container, so test them all,
  • have your views produce XHTML so you can leverage XPath for testing them,
  • do not be afraid of the database: after all it is just a repository, you can mock it up!
  • when testing messaging, use the simplest available implementations of a message call in Java, i.e. method calls.
Joe provided much more advice, but I am still too overwhelmed to be able to write it all down! So I hope some of the audience members can post some more hints in the comments of this topic.
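
The talk was about J2EE, but the program-to-interfaces and mock-object tips are language-neutral. Here's the shape of the idea in a short Python sketch (the names are mine, not Joe's):

```python
class AccountService:
    """Depends on an abstract repository, per 'program to interfaces':
    anything with a transactions(account_id) method will do."""

    def __init__(self, repository):
        self.repository = repository

    def balance(self, account_id):
        return sum(self.repository.transactions(account_id))


class MockRepository:
    """Stands in for the real database-backed repository in tests --
    it acts the same way but holds its data in memory."""

    def __init__(self, data):
        self.data = data

    def transactions(self, account_id):
        return self.data.get(account_id, [])


# The service under test never notices it's talking to a mock.
service = AccountService(MockRepository({"acct-1": [100, -25, 40]}))
```

Because the service only knows the interface, the test runs fast enough to run often -- no container, no database.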

5:35 PM 50 comments

This afternoon was a half-day tutorial on exploratory testing. Elisabeth Hendrickson gave an engaging talk - complete with in-class exploratory testing - that presented four techniques for doing successful exploratory testing:

  • variables (anything whose value can be changed, e.g., window size), values (a value given to a variable), and heuristics (rules of thumb that lead you to good tests for particular types of values, e.g., "If you can count it, try zero, one, and many" or "If it has an order, try first, middle, last"): List everything you can change and then use heuristics to develop sets of values to try for each variable.
  • state models: As you identify possible states think about how the actions you can and can't take change and how the results of those actions change. As you identify events that transfer you between states, think about various triggers for each of those events. Now use that state model to apply every controllable event to every state (regardless of whether it seems interesting) and force exits from every state.
  • nouns and verbs: Describe feature interactions as nouns (what kinds of things can you create or manage) and verbs (what can different types of users do); use adjectives and adverbs to further describe the nouns and verbs (i.e., the values you could give the variables you identified). Often the grammar you develop will be rich enough to enable you to write stories about your software - stories that you can easily translate to test cases. Writing these stories is simple: randomly pick a series of nouns and verbs and then turn these words into a short action sequence. (Yes, this sounds a whole lot like model-based testing - one more way to develop those models.)
  • personae and soap operas: Identify several types of users with distinct needs and personalities. Inhabit each persona and use your app the way that person would - i.e., find the same bugs they will. A very helpful technique for doing this is to write soap opera scripts about your user and their travails and triumphs with your software.
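
The nouns-and-verbs technique practically begs to be automated. A tiny sketch (Python; the vocabulary is invented for a Pixter-like coloring app, not from Elisabeth's materials):

```python
import random

# The grammar mined from the app: things you can manage, things you
# can do, and adverbs that describe how you do them.
NOUNS = ["drawing", "stamp", "page", "eraser"]
VERBS = ["create", "delete", "undo", "save"]
ADVERBS = ["quickly", "twice", "while switching modes"]

def story(rng, steps=3):
    """Randomly pick noun/verb/adverb triples and turn them into a
    short action sequence -- one candidate exploratory-test script."""
    return [f"{rng.choice(VERBS)} a {rng.choice(NOUNS)} {rng.choice(ADVERBS)}"
            for _ in range(steps)]

# Seeding the generator makes a surprising story reproducible later.
script = story(random.Random(2005))
```

Most generated stories will be mundane; the occasional odd combination ("delete an eraser while switching modes") is exactly the kind of path nobody thought to test by hand.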

Throughout, Elisabeth made sure we understood very well that exploratory testing is *not* just pounding on the keyboard but rather is a means of discovering new information (aka surprises - which are not necessarily bugs, just something you didn't realize about your product) about the software under test by methodically exploring its capabilities and limitations.

Lectures are all well and good, and some people can actually learn that way, but I need to apply what I hear to actually internalize it. Elisabeth handed out a bunch of Pixter toys - a roughly PDA-sized electronic (black and white) coloring book. Once we figured out how to turn the speaker off these turned out to be surprisingly (ha!) capable little buggers that served as excellent foils for applying the techniques we were learning.

Perhaps the most important piece of information was that exploratory testing is best done in packs. Each group found that the other groups had identified variables they hadn't. The same holds true for your app: one person doing exploratory testing is good, two people doing exploratory testing is better, the whole Dev+Test+Everybody Else team doing exploratory testing is better yet, and the whole team doing exploratory testing in pairs is best. Case in point: one team found a diagnostic mode -- something no one else in the other twenty-some classes Elisabeth has used Pixter in had ever done!

4:39 PM 98 comments

We welcome Michael J. Hunter, a test technical lead at Microsoft and longtime SD conference veteran (he's attended 9 conferences!). He's agreed to cross-post his musings here as well as on his own page. You can find his blog at http://blogs.msdn.com/micahel. Thanks, Michael!

4:31 PM 82 comments

Today's opening keynote by author and consultant Gerald Weinberg, "Fifty Years of Software Development--Lessons From the Ancients," was an entertaining romp through his early years at IBM. Here are some highlights:

Thinking like a computer:
Hired early on as a "computer," or person who performed mathematical computations by hand, Weinberg said that it had affected him ever since. "When I perform an operation, I think like a computer. I can look at a program and see if it will perform well or poorly. How many in the audience know what I'm talking about?" There was a smattering of hands raised.

Making transitions:
He noted how at myriad junctures in IBM's history, people were lost: For example, when the first disk drive, RAMAC, was developed, some people didn't make the transition. When the change from paper tape to punch cards happened, some didn't like that change. When the move to programming on a screen rather than with wires on a board happened, some were left behind.

The origins of open source:
Though some like to claim it is a recent invention, he mentioned an early organization, SHARE, which was an open consortium for interchanging source code. [I'll have to look into that in greater detail--I'm interested in writing a piece on the history of open source, going back earlier than most do.]

The importance of programmers:
In the 60s, at a company meeting of 500 or so programmers, he overheard two executives say, "It can't get any worse than this," meaning there couldn't ever be more programmers hired. "I'm happy to say that now you can't fit all the IBM programmers in one room."

Recognizing star developers:
Pointing to a slide showing several decks of punch cards, he said that the one with two rubber bands holding it together was the one by the best programmer, because that was the configuration management system of the day, and he knew that you had to fully use all of it, not disable some of the functionality by trying to get away with just one rubber band.

IBM's arrogance:
Early on, Big Blue refused to sell its computers. In one instance, it turned down a handsome purchase offer because the company in question wanted to perform a calculation that only took one hour a day. IBM execs felt that was a waste of the awesome power of the machine. "But I'd had some sales training before that, and I started working on the program, and when I was done with it it took 8 hours a day to do the calculation [audience laughter]."

I'd be interested in hearing from audience members as to what they enjoyed most about Weinberg's talk. Feel free to post comments here!

1:57 PM 206 comments

On Monday from 5:30 to 6:30 (or so) we're holding a panel on agile software development. We're going to cover, we hope, several issues:
1. Methodology Selection.
2. How do I formulate and document project requirements?
3. Do programmers need to do documentation?
4. Do tools help you slim down a process?
5. Is eXtreme Programming applicable to all projects?
6. What is the role of application architecture within a project? (time permitting)

My hope is that this will be an interesting panel. Please come prepared to challenge us with interesting questions.

1:08 PM 79 comments

I'm at Software Development West this week. My brain always hurts by the end of the week (this is my eighth SD) but right now that pain is far off. First up is a half-day tutorial on Domain Driven Design presented by Eric Evans (author of the book by the same name that is sitting in my to-read pile). In some ways this has been a morning of "yup, knew that, of course, doesn't everybody", since the whole point of OO is that we can program in the terms of the problem rather than the terms of the technologies and languages being used to implement the solution. One big point that Eric makes, though, is that developers tend to dive down into implementation details as soon and as fast as possible, but that we need to stay up at that higher level domain view that the business experts understand. (Because everybody knows that business experts don't understand pointers and database tables and n-tier application stacks...or worse, they go off and learn all that stuff and then start delivering requirements docs as schemas and UML! )


To quickly summarize a four-hour session into something like four sentences:

  • Design should be in terms of business concepts.
  • A primary goal of domain driven design is to develop a ubiquitous language - a language that is used by every member of the team to talk about every aspect of the project. On one project that Eric worked on, they made a significant change to their model that caused their not-the-original release date to slip another three weeks. This change, however, brought about a language that became so ubiquitous that marketing used terms from the model to talk about their product!
  • A model is a system of abstractions that describes selected aspects of a domain and can be used to solve problems related to that domain.
  • A model serves a particular use. It does not have to be as realistic as possible. It does have to allow software to enter the domain. An example Eric gave of this is the well-known Mercator projection map of the world. This map is useless for comparing relative sizes of land masses, but if you want to navigate between land masses it's just the ticket.
  • A model is irrelevant if it isn't reflected in code.
  • Models that reflect their domain don't have classes that end in "or" or "er". For example, a design to handle monetary instruments would talk about assets and accrual schedules, not accrual calculators.
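To make that last point concrete, here is a minimal sketch of what the naming guideline looks like in code. The class and field names (Asset, AccrualSchedule, the day-count formula) are my own invented illustration, not from Eric's session: the point is simply that the model speaks in terms of assets and accrual schedules, not an "AccrualCalculator".

```python
from dataclasses import dataclass
from datetime import date

# Instead of a procedural "AccrualCalculator", the model names the
# domain concepts directly: an Asset carries an AccrualSchedule.

@dataclass
class AccrualSchedule:
    annual_rate: float  # e.g. 0.05 for 5% per year

    def accrued(self, principal: float, start: date, end: date) -> float:
        """Simple actual/365 day-count accrual over the period."""
        days = (end - start).days
        return principal * self.annual_rate * days / 365

@dataclass
class Asset:
    principal: float
    schedule: AccrualSchedule

    def interest_accrued(self, start: date, end: date) -> float:
        return self.schedule.accrued(self.principal, start, end)

bond = Asset(principal=10_000, schedule=AccrualSchedule(annual_rate=0.05))
print(round(bond.interest_accrued(date(2005, 1, 1), date(2005, 12, 31)), 2))
```

Notice that a domain expert could read the last two lines aloud and recognize every word: that is the ubiquitous language showing up in code.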

11:00 AM 125 comments

This morning I ran a workshop overviewing UML 2. My approach is pretty straightforward. First I'll overview the thirteen modeling techniques (I actually have detailed descriptions posted at http://www.agilemodeling.com/essays/umlDiagrams.htm), and then the tutorial participants will be asked to discuss (in smaller groups) three fundamental questions:
1. What's actually useful in UML 2?
2. What's still missing?
3. How do you effectively model in the real world?

I'll then wrap up with a quick discussion of Agile Model Driven Development (AMDD), see www.agilemodeling.com/essays/amdd.htm .

My hope is that the workshop will prove to be interesting, and I hope to see you there. This evening I'll post to this blog the results of the group discussions.

7:22 AM 132 comments

Today Pramod Sadalage and Nick Ashley (both of ThoughtWorks) and I are giving a tutorial on applying many of the techniques that I wrote about in Agile Database Techniques (www.ambysoft.com/agileDatabaseTechniques.html). We'll be talking about agile data modeling, the need for data professionals and developers to work together in an effective manner, and how to organize your database(s) to support these things. More importantly, we're going to do live demos of how to set up a developer's workstation with the current version of the development database, go through several database refactorings to show you step by step how to do it, and show how to deploy into production.
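As a taste of what a database refactoring looks like, here is a minimal sketch of one of the classic ones, "Rename Column," done in the transitional style: add the better-named column, migrate the data, and keep the old column synchronized until every application has moved over. The table and column names are invented for illustration, and I'm using an in-memory SQLite database just to keep the sketch self-contained.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, fname TEXT)")
db.execute("INSERT INTO customer (id, fname) VALUES (1, 'Ada')")

# Step 1: introduce the better-named column alongside the old one.
db.execute("ALTER TABLE customer ADD COLUMN first_name TEXT")

# Step 2: migrate the existing data.
db.execute("UPDATE customer SET first_name = fname")

# Step 3: keep the two columns in sync during the transition period,
# while some applications still write to the old column.
db.execute("""
    CREATE TRIGGER sync_first_name AFTER UPDATE OF fname ON customer
    BEGIN
        UPDATE customer SET first_name = NEW.fname WHERE id = NEW.id;
    END
""")

# An application still using the old column name...
db.execute("UPDATE customer SET fname = 'Grace' WHERE id = 1")
# ...and the new column stays consistent.
print(db.execute("SELECT first_name FROM customer WHERE id = 1").fetchone()[0])
```

Once every application reads and writes first_name, a follow-on refactoring drops fname and the trigger, completing the rename.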

I hope you can come to this tutorial this afternoon. Afterwards I'll post my observations as to how it went.

7:12 AM 19 comments
Tuesday, February 15, 2005

I was recently asked for my opinion on the role of transformation in enterprise service buses. The person requesting my opinion had come from an EAI background and was under the assumption that ESBs with transformational capabilities were a better tool for integration. I explained to this person that with Service-Oriented Integration, many of the hurdles to integration that required the services of a big EAI hub were now being removed by the services themselves, such as proprietary formats, incompatible data types, and interminable data structures. While there are still cases, mostly B2B, where one document type may need to be transformed into another completely different structure and gaps filled in, these types of requirements can now be satisfied by Business Process Management tools. In this type of environment, the ESB is responsible for ensuring enterprise qualities-of-service on behalf of the services. Therefore, transformation plays a much diminished role as part of an ESB.
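To illustrate the kind of B2B case I mean, here is a minimal sketch of a document transformation that restructures one document type into a completely different shape and fills in a gap with a default. All field names here are invented for illustration; the point is only that mapping logic like this can live in a Business Process Management layer, leaving the ESB to focus on qualities-of-service.

```python
def transform_order(partner_doc: dict) -> dict:
    """Map a partner's flat order document into a nested internal structure."""
    return {
        "order": {
            "id": partner_doc["orderNo"],
            # Gap-filling: the partner's document may omit the currency.
            "currency": partner_doc.get("curr", "USD"),
            "lines": [
                {"sku": item["partId"], "qty": item["count"]}
                for item in partner_doc["items"]
            ],
        }
    }

incoming = {"orderNo": "PO-1001", "items": [{"partId": "A7", "count": 3}]}
internal = transform_order(incoming)
print(internal["order"]["currency"])  # the default was applied
```

In a real deployment the same mapping would typically be expressed in whatever transformation language the BPM tool provides, but the shape of the work is the same.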

4:44 PM 158 comments