Archive for The Project

Introducing Hydrocon

Posted in smalltalk on April 29, 2009 by moffdub

You may recall me mentioning in passing some attempt at re-implementing The Project in C#. You may also recall me later reflecting on the several false starts I made along the way, which eventually led me to abandon these attempts.

Lately, though, there has been an urge building in me, kind of like lower bowel movements. I feel like most of my obsessive questioning of how to apply domain-driven design and object-orientation has settled down. Part of the reason why I kept starting and stopping my earlier attempts at re-implementing The Project was because of how little I knew, and that was because of how little experience I had doing it the right way.

I think I’m ready for another try. This attempt at re-implementing The Project shall be code-named “Hydrocon.” Why? I don’t know. Further, this will be written in Smalltalk. There is no reason to learn yet-another-curly-brace language.


Dynamic Enumerations, revisited

Posted in The Project on January 14, 2009 by moffdub

Announcer: Now, it’s time for the custodian of clean code, the man who codes what he means and means what he codes, El Moffdo!

That’s correct Mr. Announcer, I am your host, designing with a mere pad and pen just to give the bugs a chance. This is the long-awaited sequel to the oft-searched topic of dynamic enumerations. Let’s go.

First, why is this sequel being written? I have learned that the general problem of a variable’s allowed values being restricted to a finite set that can grow or shrink at run-time is not a unique one; it goes by the alias of the Allowed Value Table (AVT). The quest for solutions to this problem is the source of many of the search engine referrals to this blog.


Dynamic State Pattern

Posted in Java on July 30, 2008 by moffdub

Skip the bloggy parts and head to the pattern.

Background

My frequency of posting had decreased slightly since my first two-post night, due mainly to still going through the boot-up sequence at work.

I do a lot of “sitting there seemingly drinking coffee”. In fact, I am typing this post up as I sit here waiting for a co-worker to arrive and give me work.

You may recall me mentioning that I had started re-coding core parts of The Project. My laundry list of complaints against my ignorant self included:

  • anemic domain model
  • TDD not followed
  • testing not isolated
  • one huge controller class for all UI windows AND interaction with domain and infra layers
  • generally, patterns not used where they could’ve been used
  • abuse of getters
  • no clear demarcation, in source files, of layers
  • only half of the Dynamic Enumeration pattern implemented
  • validation and other rules not encapsulated as Rule objects

… and if I went through each source file of The Project, I could easily triple the size of this list.

I put that on hold as work started. But the slow start-up period has not provided a sufficient outlet for my neurosis. This lack of coding/design activity accumulated until I eventually had to cash the check.

As I returned to my second attempt at The Project, I was kind of disgusted. So disgusted that I scrapped it and started over — again.

I was disgusted mainly because it seemed like my domain model suffered from domain anemia. I had setters for properties, which provided some validation logic. Getters were regulated. But for some objects, that was it. I had started coding first and then tried to get the code to read like English sentences.

This time, before writing a line of code, I tried some organic object-oriented analysis and design. I put on my customer hat and started recording sentences of functionality.

  • “Users describe equipment with a description.”
  • “Users put equipment into one and only one room.”
  • “Users update equipment with modifications to a description.”

and so on.

Then I annotated each sentence with its code equivalent, minding layers, SRP, DDD, and such.

  • “Users describe equipment with a description.”
    User somebody --> equipment.describe(description);
  • “Users put equipment into one and only one room.”
    User somebody --> room.put(equipment);
  • “Users update equipment with modifications to a description.”
    User somebody --> equipment.update(description);

(the “-->” means that the class on the left calls the method on the right)

Note the utter absence of getters (for now) and setters. Where I had used setters, I am now using describe() and update(). Very nice. It even led me to an interesting de-coupling of instance member names from client code.
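The annotated sentences above translate to something like the following sketch. These are illustrative names only, not The Project's actual classes; the point is that clients tell objects what to do instead of asking for their internals.

```java
// Illustrative sketch of the annotated sentences -- no getters or setters.
class Equipment {
    private String description;

    Equipment(String description) {
        this.description = description;
    }

    // "Users describe equipment with a description."
    void describe(String description) {
        this.description = description;
    }

    // "Users update equipment with modifications to a description."
    void update(String newDescription) {
        this.description = newDescription;
    }

    @Override
    public String toString() {
        return "Equipment: " + description;
    }
}

class Room {
    private final java.util.List<Equipment> contents =
        new java.util.ArrayList<Equipment>();

    // "Users put equipment into one and only one room."
    void put(Equipment equipment) {
        contents.add(equipment);
    }
}
```

Notice that the instance member names never leak into client code; describe() and update() could store their argument however they like.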

Motivation

Now, finally, the point of the post. Each piece of equipment has a lifecycle status: Spare, In use, In repair, In transit, Broken, and Unknown. Originally, and in the first remake, this was just another Dynamic Enumeration. But if I had been a good requirements analyst, I would’ve realized that the business never moved equipment from state to state willy-nilly.

Equipment would only go from Spare to In use, In repair, or In transit. It would never go to Broken because nobody would keep broken spare equipment. This and other insights all pointed to the State pattern.

Why must there always be a problem? The State pattern is excellent when you know the states and transitions ahead of time. This was not the case for The Project. Administrators had to be able to at least edit and add new lifecycles. Sounds like I need a Dynamic State pattern.

The Pattern

The State pattern involves creating a base state class and subclasses, one for each of your states. The client object that is tracking state passes itself to the constructor of the initial state (subclass).

Transitions are specified in each subclass as a method bearing the name of a valid transition. In these methods, the client object is “called back” to notify it of a state being changed, so it can do whatever it needs to do to handle that event. Then, the client object’s state variable is updated by the subclass to the target of the transition.
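For reference, the classic hard-coded version looks roughly like this (illustrative names, using the equipment lifecycle from above): the client passes itself to the initial state, and each transition method calls the client back before pointing its state variable at the next state.

```java
// Classic, hard-coded State pattern sketch (names are mine).
class EquipmentItem {
    EquipmentState state = new Spare(this);

    void onTransition(String event) {
        System.out.println(event);  // react to the state change
    }
}

abstract class EquipmentState {
    protected final EquipmentItem client;

    protected EquipmentState(EquipmentItem client) { this.client = client; }

    // transitions not overridden by a subclass are illegal from that state
    public void toInUse() { throw new IllegalStateException("illegal transition"); }
}

class Spare extends EquipmentState {
    Spare(EquipmentItem client) { super(client); }

    @Override
    public void toInUse() {
        client.onTransition("now in use");  // call the client back
        client.state = new InUse(client);   // update the client's state variable
    }
}

class InUse extends EquipmentState {
    InUse(EquipmentItem client) { super(client); }
    // inherits the "illegal transition" default for toInUse()
}
```

The trouble, as the next paragraph explains, is that every state and transition here is baked in at compile time.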

For this to be dynamic, the entire state graph will have to be created at run-time. As far as the client object goes, the dynamic nature of the state graph will affect the callback mechanism. I have client objects implement a StateClient interface.

public interface StateClient
{
	public void changeState(State nextState);
	public void callback(String stateVariableName, String transitionName)
		throws NoTransitionFoundException, UndefinedStateVariableException;
}

Implications for implementers of StateClient: a mapping between transition names and the functions that handle them has to be maintained. Here is where this pattern is a bit weak.

Without much effort, the best you’ll manage here is a scheme with no unique logic per state, unless you want to hard-code some of the states ahead of time.

If you add a new state on-the-fly that does have unique logic, you probably need to invest in more than a transition-to-function mapping. I’m thinking you’ll need an interface instead, with a file or table somewhere defining which implementations map to which transition names, injected at run-time.

One other alternative to this approach would be to swap out the entire implementation of StateClient whenever there is a change, which may or may not be a viable option.
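The transition-handler types that the Order class uses further down come from the example download; they presumably look something like this sketch. The OrderStub stand-in is mine, to keep the snippet self-contained; the real handlers call back into Order itself.

```java
// Sketch of the transition-handler types used by Order further down.
// Names match the download; the bodies here are my guess.
interface OrderStateAction {
    void invoke();
}

// Stand-in for the domain object the handlers call back into.
class OrderStub {
    boolean announced = false;

    void newOrder() {
        announced = true;
        System.out.println("now a new order");
    }
}

// Each concrete action forwards to one method on its parent,
// giving each transition a place for unique logic.
class NewOrderAction implements OrderStateAction {
    private final OrderStub parent;

    NewOrderAction(OrderStub parent) { this.parent = parent; }

    public void invoke() { parent.newOrder(); }
}
```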

Instead of specific subclasses, have one State class for which you can specify a name. The same goes for transitions: give each transition a name that maps to its target State object.

public interface State
{
	public void addTransition(String transitionName, State targetState);
	public void invokeTransition(String transitionName, StateClient parent)
		throws IllegalTransitionException, NoTransitionFoundException,
		UndefinedStateVariableException;
}

Implications for implementers of State: a mapping between transition names and target states has to be maintained, built once at application start-up. Because the State objects are therefore shared rather than tied to one client, the parent has to be injected each time a transition is invoked.

Further, behind the scenes, you’ll probably want to keep track of a “state identifier”: implementer-specific data about the state, supplied at run-time. In my case, this is an EnumValue, and I provide a getOrder() method to retrieve its drop-down menu order.

Example

Never one to hand-wave, I have an example implementation in Java that you can download. You should get this output:

YQ1817H981 is now a pending order
YQ1817H981 is now a new order
YQ1817H981 is now a pending order
Illegal transition from Pending to fjeiofje
No transition handler defined for Active Order
YQ1817H981 is now a shipped order
Illegal transition from Shipped to New Order

Some classes of note:

import java.util.*;

public class OrderState implements State
{
	private HashMap<String, State> transitionMap;
	private String name;
	private String stateVariableName;
	
	public OrderState(String stateName, String stateVariableName)
	{
		this.transitionMap = new HashMap<String, State>();
		this.name = stateName;
		this.stateVariableName = stateVariableName;
	}
		
	public void addTransition(String transitionName, State nextState)
	{
		this.transitionMap.put(transitionName, nextState);
	}
	
	public void invokeTransition(String transitionName, StateClient parent)
		throws IllegalTransitionException, NoTransitionFoundException,
		UndefinedStateVariableException
	{		
		// an unknown transition name means it is not legal from this state
		if(this.transitionMap.get(transitionName) == null)
			throw new IllegalTransitionException(this.name, transitionName);
		
		// notify the client first, then move it to the target state
		parent.callback(this.stateVariableName, transitionName);
		parent.changeState(this.transitionMap.get(transitionName));
	}
}

and the (anemic) domain object Order:

import java.util.*;

public class Order implements StateClient 
{
	private State orderStatus;	
	private String name;
	
	// transition name -> handler, for the "Order Status" state variable
	private HashMap<String, OrderStateAction> orderStatusTransitionHandlerMap;
	// state variable name -> that variable's transition-handler map
	private HashMap<String, HashMap<String, OrderStateAction>> stateVariableMap;
	
	public Order(String name)
	{
		this.orderStatusTransitionHandlerMap = new HashMap<String, OrderStateAction>();
		this.stateVariableMap = new HashMap<String, HashMap<String, OrderStateAction>>();
		
		this.name = name;
		
		this.orderStatus = Runner.newOrderState;
		
		// this should be filled externally by an IoC container
		this.orderStatusTransitionHandlerMap.put("New Order", new NewOrderAction(this));
		this.orderStatusTransitionHandlerMap.put("Ship Order", new ShippedOrderAction(this));
		this.orderStatusTransitionHandlerMap.put("Pending Order", new PendingOrderAction(this));
		
		// a map of maps
		this.stateVariableMap.put("Order Status", this.orderStatusTransitionHandlerMap);
	}
	
	public void changeOrderStatus(String transitionName)
		throws IllegalTransitionException, NoTransitionFoundException,
			   UndefinedStateVariableException
	{
		this.orderStatus.invokeTransition(transitionName, this);
	}
	
	public void changeState(State nextState)
	{
		this.orderStatus = nextState;
	}
	
	public void callback(String stateVariableName, String transitionName) 
		throws NoTransitionFoundException, UndefinedStateVariableException
	{
		HashMap<String, OrderStateAction> handlers = this.stateVariableMap.get(stateVariableName);
		
		if(handlers == null)
			throw new UndefinedStateVariableException(stateVariableName);
		
		OrderStateAction handler = handlers.get(transitionName);
		
		if(handler == null)
			throw new NoTransitionFoundException(transitionName);
		
		handler.invoke();
	}
	
	public void newOrder()
	{
		System.out.println(this.name + " is now a new order");
	}
	
	public void shipped()
	{
		System.out.println(this.name + " is now a shipped order");
	}
	
	public void pending()
	{
		System.out.println(this.name + " is now a pending order");
	}
}
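One piece not shown above is Runner, which builds the state graph at start-up (Order references Runner.newOrderState). The wiring presumably looks something like the sketch below; the MiniState/MiniRunner stand-ins are mine, to keep the snippet self-contained, and none of this is the download’s exact code.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for OrderState: named nodes plus a transition map.
class MiniState {
    final String name;
    private final Map<String, MiniState> transitions = new HashMap<String, MiniState>();

    MiniState(String name) { this.name = name; }

    void addTransition(String transitionName, MiniState target) {
        transitions.put(transitionName, target);
    }

    MiniState invokeTransition(String transitionName) {
        MiniState next = transitions.get(transitionName);
        if (next == null)
            throw new IllegalStateException(
                "Illegal transition from " + name + " to " + transitionName);
        return next;
    }
}

class MiniRunner {
    // The whole graph is data created at run-time; nothing lives in
    // hard-coded subclasses, so an administrator could edit it on the fly.
    static MiniState buildGraph() {
        MiniState fresh   = new MiniState("New");
        MiniState pending = new MiniState("Pending");
        MiniState shipped = new MiniState("Shipped");

        fresh.addTransition("Pending Order", pending);
        pending.addTransition("New Order", fresh);
        pending.addTransition("Ship Order", shipped);

        return fresh;  // the initial state handed to new Orders
    }
}
```

Walking this graph (New, then Pending, then Shipped, then an illegal hop back) reproduces the shape of the output listed above.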

Finally, I decided to see if anyone else had done something like this before, and wouldn’t you know it, I found this post from a couple of years ago. Comparing the two, the implementation I am offering here uses a couple fewer classes, by making transitions Strings and by not wrapping the states in a StateMachine class.

Home-made Stress Testing

Posted in The Project on July 12, 2008 by moffdub

Software engineering nowadays necessitates tools. From IDEs to the xUnit family of unit testing frameworks to aspect-oriented frameworks to performance testing tools, you will be hard-pressed to find a decent development shop that does not use tools at every opportunity…except on The Project.

Yes, you see, I am still waiting for The Company to send me my check for my last week of work. Meanwhile, I have already been paid for my first week of work at my new job.

So, as you can imagine, dumping a stack of purchase orders on my boss’s desk was not going to get me anywhere. This blog has chronicled the results of this frugality.

But to their credit, I was provided with four test machines (not behemoths, but regular-use workstations) for me to use as I saw fit.

What, then, is a poor chap to do when it comes time for performance testing?


(this is how easy it should be)

Simple: work from the ground up. What was the problem? I had to simulate 300 simultaneous users in 100 buildings spread across the American northeast hitting the same server, and somehow measure the response time in each of our use cases.


(ready…click!)

My first crack at this was to strip the UI layer from the system and replace it with a console app. Hey, layers came in handy after all. After obtaining an operational profile, I essentially wrote a script in the console app that acted out this profile.

Then, another executable spawned off 70 copies of this console app. Run that on my four test machines and 70*4=280, which was close enough for us.

In the script, I added code to time how long each operation of interest took. This proved to be inaccurate because of the overhead of console output and exception handling. I finally bit the bullet and added more fine-grained time measurement code within the code-under-test itself. Yes, this did spawn off a “test” branch version of some files, but the cost and inconvenience was minimal.

The time measurement code would dump the statistics into Excel sheets, and a quick VBA script tallied everything up and gave us an average response time for the system.
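The measurement itself was nothing exotic. In Java terms (the real code lived in the .NET console app, so every name here is invented), it amounted to something like:

```java
import java.util.ArrayList;
import java.util.List;

// Invented sketch of the per-operation timing used in the script.
class OperationTimer {
    private final List<Long> samplesMillis = new ArrayList<Long>();

    // Run one operation of interest and record how long it took.
    long time(Runnable operation) {
        long start = System.nanoTime();
        operation.run();
        long elapsedMillis = (System.nanoTime() - start) / 1000000;
        samplesMillis.add(elapsedMillis);
        return elapsedMillis;
    }

    // The VBA step boiled samples like these down to one average per operation.
    double averageMillis() {
        if (samplesMillis.isEmpty()) return 0.0;
        long total = 0;
        for (long sample : samplesMillis) total += sample;
        return (double) total / samplesMillis.size();
    }
}
```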

For a few sprints, this worked out fine, and I was meeting the goal set forth in the requirements document. But this model was fundamentally flawed, and it soon showed when more and more operations were added to the profile. The measured response time skyrocketed, and the problem was not in the new code.

The problem: in the real operating environment, each user would be on their own machine. That means each one would have its own CPU, memory, hard drive, and network connection. The effect of 70 processes executing identical code at the same time and sharing these four resources was warping the stress test. And the hardware wasn’t built for large-scale processing. These were regular plain workstations.

The new problem was how to distribute the remaining 69 processes in such a way as not to distort the measurements. And gaining access to a test machine in each of the 100 buildings was not going to happen, nor would it be manageable.

The big flash of insight came when I theorized about the layout of The Company’s WAN. A logical person would give each of the 100 buildings its own router, and traffic in building A would not be routed through building B, thus not affecting building B’s router. The traffic would build up on the server side only.


(WAN, LAN…oh man…)

After confirming this theory, I came up with my final home-made stress testing method:

  • Keep the console script, but spawn it only 50 times, and only on one machine. There were, incidentally, around 50 users in my home building, so this simulated the 50 users hitting the server from our own building. I dumped the timing code and used the working copy of the system.
  • For the remaining 210 users, simulate this traffic on the server in order to tie up server resources. This is the tricky part. One of the constraints of The Project was that I had no privilege whatsoever to install any executable on our server. I had a SQL Server, and that was it.

    So I had to bite the bullet and write a script in Transact-SQL. Not only that, but the operational profile had to be translated into stored-procedure calls. It was ugly, but there was no other choice.

  • Finally, modify the UI to display the measured response time on screen. This consisted of simpler timing code and a text box on each screen showing the elapsed time between a click and the conclusion of the operation. These values were used to calculate the response time.

To put it all in action: fire up the server-side script, the local 50-process script, and a copy of the system on my own machine. Run through the use cases and record the response times displayed on the screen.

This new method successfully modeled the intended operating environment. It also uncovered some silly performance bugs, like queries that used the LIKE operator to match on a primary key. Hey, I never claimed to be a super uber guru, and I probably never will. It also uncovered the performance issue associated with sorting colored DataGrid rows.

It can be done, and it can be done cheaply. But, to be honest, the scalability of a solution like this would be directly related to how messy the server-side T-SQL script became. And I have a feeling that this is limited, at best.

WinForms DataGrid Pagination in .NET 1.1 (with highlighting)

Posted in .NET on June 25, 2008 by moffdub

(download at the end of this post)

Problem

Believe it or not, there are still poor souls out there stuck with antiquated equipment. In the previous post on highlighting DataGrid rows, I recounted a situation in which I found myself needing to implement by hand, in .NET 1.1, a feature that is easy in .NET 2.0 or above.

To summarize, instead of handling an event, I had to go down a much pricklier path:

  • Via polymorphism and event-handling dexterity, enable arbitrary highlighting of rows in a DataGrid.
  • Force the colored rows to behave when sorting on any column.
  • Mourn the fact that for relatively large DataSets, the re-coloring of rows after a sort is a performance problem, the solution of which is to implement paging.

To the best of my knowledge, paging a System.Windows.Forms DataGrid is not supported in any version of .NET.

Solution

When I started thinking about this post on Monday, I was just going to outline possible solutions:

  • Out-of-memory paging: hit your data source as the user pages through data, only loading pages that need to be loaded for viewing
  • Out-of-memory paging with cache: same as above except load the current page, previous page, and next page
  • In-memory paging: load entire DataSet and page in memory

Upon first encountering the problem while on The Project, my first instinct was either of the first two options. While considering the solution on Monday, this did not change.

But I have become much wiser and lazier since The Project ended, and honestly, I don’t like writing so much code, because bugs will surely follow. The first two solutions are not ideal because they involve knowing what your data source is, and most likely changing it to accommodate extra parameters that indicate how many records to fetch for a given query.

In The Project’s case, SQL Server inexplicably does not allow you to use a stored procedure in a FROM clause. So to do any out-of-memory paging would require changes to the s-procs.

Then, to maintain proper layering, report services and repositories would have to change for the same reason. Add caching to the mix and this is a minor headache.

Finally, out-of-memory paging would be good for performance problems related to the size of the DataSet. No such problem existed, so tackling it at this point would be over-engineering.

So, I settled on in-memory paging. At first I was only going to outline the solution with UML and a half-hearted code example. As usual, I wrote the example first. Then I looked deep into its eyes. And just like that episode of Seinfeld where Jerry shaves his chest:

Jerry: I did something stupid.

Kramer: What did you do?

Jerry: Well I was shaving. And I noticed an asymmetry in my chest hair and I was trying to even it out. Next thing I knew, (high pitched voice) Gone.

one thing led to another, half a day of coding and debugging led to two days, and at the end of this post you can download a WinForms DataGrid Pagination demo.


(red is the highlighting color here)

Since you can download the VS project in all its glory, I will only list the PaginatedDataGrid interface:

namespace System.Windows.Forms
{
	public interface PaginatedDataGrid
	{
		uint pageSize
		{
			set;
		}

		void moveToNextPage();
		void moveToPrevPage();
		void moveToFirstPage();
		void moveToLastPage();
		bool canMoveBack();
		bool canMoveForward();
		String getCurrentPaginationInfo();
		void sort(int col, bool descending);
	}
}

This should be mostly self-explanatory. The one thing worth explaining is that the PaginatedDataGrid is a wrapper around an existing DataGrid, and that DataGrid must use a DataSet as its DataSource. Hence, the download contains only one implementation of this interface. If I ever wanted this code up on CodeProject, I’d implement the remaining five.

PaginatedDataGrid is a bit like the Decorator pattern in that it can be turned on and off at run-time, save the exact inheritance structure of the actual pattern.
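Under the hood, in-memory paging is just index arithmetic over the full row set. Here is a sketch of the same idea in Java (the real implementation wraps a DataGrid and a DataSet; this illustrative version pages a plain list, and all names are mine):

```java
import java.util.List;

// In-memory paging sketch: keep all rows, expose one page at a time.
class InMemoryPager<T> {
    private final List<T> allRows;
    private final int pageSize;
    private int currentPage = 0;  // zero-based

    InMemoryPager(List<T> allRows, int pageSize) {
        this.allRows = allRows;
        this.pageSize = pageSize;
    }

    int pageCount() {
        // ceiling division; an empty row set still has one (empty) page
        return Math.max(1, (allRows.size() + pageSize - 1) / pageSize);
    }

    List<T> currentRows() {
        int from = currentPage * pageSize;
        int to = Math.min(from + pageSize, allRows.size());
        return allRows.subList(from, to);
    }

    boolean canMoveForward() { return currentPage < pageCount() - 1; }
    boolean canMoveBack()    { return currentPage > 0; }
    void moveToNextPage()    { if (canMoveForward()) currentPage++; }
    void moveToPrevPage()    { if (canMoveBack()) currentPage--; }

    String getCurrentPaginationInfo() {
        return "Page " + (currentPage + 1) + " of " + pageCount();
    }
}
```

Sorting works the same way as the bug list below describes: sort allRows as a whole, then re-derive currentRows() for whatever page you are on.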

Also, you’ll notice this example in C# instead of the usual VB. I have started re-implementing the core of The Project in my spare time so I can fix all of the ridiculous design mistakes I made, practice TDD, and learn C#.

Usage

I wrote this example in .NET 3.5. It is ready to run, except that you need to provide your own DataSet, however you wish to do so. I did it through SQL Server and left the boilerplate code for that route in the demo.

Checking the “Pagination on” checkbox will turn the feature on and off so you can see the performance problem I was facing. You can also choose your own page size by entering a valid positive integer (I didn’t sanity-check this field so be nice) and tabbing off the textbox.

Comments

Here is a sampling of the interesting bugs I had to destroy:

  • When moving from page to page, sometimes page P’s first row, if to be highlighted, wouldn’t be highlighted until you either moved to page P+1 or highlighted the rows twice.
  • Getting a truly “deep copy” of the rows of the initial DataSet proved to be more of a challenge than it should’ve been.
  • Adding those rows back to the DataTable of the DataSet had to be done with a new object array, because you can’t construct a new DataRow directly, and adding the existing one throws an ArgumentException that I didn’t feel like dealing with.
  • Sorting on a page would only sort that page, when it should sort the entire DataSet and then reload the page you’re on for that DataSet.

Download

ZIP file can be downloaded here. Any feedback is much appreciated.

Highlighting DataGrid Rows

Posted in The Project on June 22, 2008 by moffdub

I have a story about one instance in which management’s refusal to upgrade The Project from an old version of .NET cost them valuable developer time on a feature that later versions make trivial.

When showing all of the equipment of a certain type, there were certain statuses, like “In use” or “Discarded”, that were not to contribute to the equipment count of a room. And in order to keep the counts consistent with the full listing, it would make sense not to show such equipment in the listings. But then you had no easy way to change something from “In use” back to “Spare”.

The solution was to show them anyway in the listing, but highlight their rows to set them apart from the rest of the equipment. Since the natural Windows Forms control for this task is a DataGrid, I set out on figuring out how to highlight specific rows of a DataGrid in .NET 1.1.

The version number is key. Being restricted to .NET 1.1, I was unable to use a DataGridView, which is a customizable DataGrid. This cut me off from handling the CellFormatting event of DataGridView and implementing a rather painless solution to the problem.

After a long search, I found an article that describes the solution. Since the article is fairly detailed and contains lots of other examples, I will summarize how to color a specific row of a .NET 1.1 DataGrid (refer to the article itself for code examples):

  • Define a new event with a new data type that will keep track of a cell’s row, column, background color, and font color. Make sure the new data type is passed by reference.
  • Handle this event in the form that contains your DataGrid. Do not use the handles keyword. This way, you can re-use the handler and you aren’t tied to a specific number of columns.

    In this handler, you provide the logic that determines which rows are colored. For us, we assume we have this info in a Hashtable somewhere. So we compare if the event’s row shows up in our Hashtable and is not selected. If so, set the background and font color properties of the event.

  • Inherit from DataGridTextBoxColumn and override the seven-argument Paint method. This method will get called every time a cell is drawn on the screen.
  • Now you fire the event you earlier defined, providing it with the current row and column, which you can get from inside the overridden Paint() and from Me.DataGridTableStyle.GridColumnStyles.IndexOf(Me).
  • Finally, go back to your form and replace instances of DataGridTextBoxColumn with your derived column. Use the AddHandler statement to add the event handler to handle the derived column’s newly-defined event.

    The event is handled in the form and its argument gets altered. After the RaiseEvent statement, you can inspect the argument to see if it was altered and set the appropriate brush color to pass to MyBase.Paint().

All of that to highlight a lousy row.

Problems

Well, fine, it might be an annoying exercise in polymorphism and event handling for me, but it did get the job done.

But there was a problem. The columns of this DataGrid had to be sortable. As it stood after the above steps, if you clicked on a column, the grid would sort based on that column, but there wouldn’t be a re-painting of the rows, so the wrong rows would be highlighted after a sort.

Solution:

  • Keep track of a need-to-repaint flag.
  • Handle the DataGrid’s Paint event. Check the flag and re-paint if the flag is true. By “re-paint”, I mean iterate through each row, examine the correct column(s) value(s) that determine if a row needs to be highlighted, and add that row number to the Hashtable. Be sure to set the flag back to false or you’ll be painting forever.
  • Handle the DataGrid’s OnClick event, not its OnMouseDown event. Perform a HitTest and see if the click happened on a column heading. If yes, set need-to-repaint to true.

Why not handle OnMouseDown? I was never fully sure if I correctly understood this MouseClick article:

Depressing a mouse button when the cursor is over a control typically raises the following series of events from the control:

1. MouseDown event.
2. Click event.
3. MouseClick event.
4. MouseUp event.

and this MouseDown article:

Mouse events occur in the following order:

1. MouseEnter
2. MouseMove
3. MouseHover / MouseDown / MouseWheel
4. MouseUp
5. MouseLeave

but I think it is because MouseDown occurs before MouseClick, so in MouseDown, let the re-sort happen with no re-paint. Then, in MouseClick, do a re-paint; this avoids re-painting too early.

OK, that is solved, but now whenever I have a substantial amount of data in a grid, there is a “flickering” effect whenever the grid is loaded or a column is sorted on.

Solution: in DataGrid’s Paint handler, hide the grid before doing the re-paint and then re-show it once finished.

OK, that is solved, but now, whenever a very substantial amount of data is in a grid, sorting by a column introduces a considerable multi-second delay before the grid is visible again.

Solution: paginate the data. Problem: we are in Windows Forms, not ASP.NET.

So we’ve followed this to its logical conclusion: now we’ll have to implement a custom paging solution in order to side-step this performance problem. I never actually got around to implementing this feature, and possible solutions are the topic of a separate post.

So there you have it. Refusal to upgrade to a newer version of .NET, even though the platform itself is rather high-level, cost us substantial development and test time. Don’t underestimate the value of upgrades.

The Getter Setter Debate

Posted in Design Issues on June 16, 2008 by moffdub

I’ll be honest. I never thought there was a debate as to whether getters and setters were good practice or not. It never occurred to me, directly at least, that I should avoid them. Then a couple of days ago, I was browsing Reddit and came across this article by Michael Feathers; of interest is this excerpt:

“John Nolan gave his developers a challenge: write OO code with no getters. Whenever possible, tell another object to do something rather than ask. In the process of doing this, they noticed that their code became supple and easy to change. They also noticed that the fake objects that they were writing were highly repetitive, so they came up with the idea of a mocking framework that would allow them to set expectations on objects – mock objects.”

Then somehow I run across another article that happened to link to the Feathers post:

“Suppose that we want to print a value that some object can provide. Rather than writing something like statement.append(account.getTransactions()) instead we would write something more like account.appendTransactionTo(statement) We can test this easily by passing in a mocked statement that expects to have a call like append(transaction) made. Code written this way does turn out to be more flexible, easier to maintain and also, I submit, easier to read and understand. (Partly because) This style lends itself well to the use of Intention Revealing Names.”

Ah, this blog entry has a code example, something I respect. Now I do agree with the latter point on Intention Revealing Names; this is a key Evanism from DDD, named Intention Revealing Interfaces.

However, this example seems to imply a code smell. Yes, the account object is hiding from the consumer how it manages its transactions, but now appendTransactionTo() is altering its argument. This is something I try to avoid.

Then again, this smell could be avoided by returning a new statement object with the transactions appended. That approach seems slightly contrived, though, and possibly too costly in a performance-sensitive environment.

Another way I can think of:


statement.appendTransactionsFrom(account);

This gets you the intention-revealing part, but you lose on killing getters because appendTransactionsFrom() has to get at the transactions of account somehow.
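To make the quoted style concrete, here is a minimal Java sketch of appendTransactionTo (class names and bodies assumed from the excerpt): the account pushes its data out itself, so no getter ever exposes the transaction list.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the quoted tell-don't-ask style; names assumed from the excerpt.
class Statement {
    private final List<String> lines = new ArrayList<String>();

    void append(String transaction) { lines.add(transaction); }

    int lineCount() { return lines.size(); }
}

class Account {
    private final List<String> transactions = new ArrayList<String>();

    void record(String transaction) { transactions.add(transaction); }

    // Tell, don't ask: the account appends its own transactions.
    void appendTransactionTo(Statement statement) {
        for (String t : transactions)
            statement.append(t);
    }
}
```

Note that this is exactly the argument-altering shape I objected to above; the getter is gone, but the Statement gets mutated.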

Noting now that this getter-setter topic is not a rogue argument, I did a search and came across this JavaWorld article, excerpt following:


double orderTotal;
Money amount = ...;
//...
orderTotal += amount.getValue(); // orderTotal must be in dollars

The problem with this approach is that the foregoing code makes a big assumption about how the Money class is implemented (that the “value” is stored in a double). Code that makes implementation assumptions breaks when the implementation changes. If, for example, you need to internationalize your application to support currencies other than dollars, then getValue() returns nothing meaningful.

The business-logic-level solution to this problem is to do the work in the object that has the information required to do the work. Instead of extracting the “value” to perform some external operation on it, you should have the Money class do all the money-related operations, including currency conversion. A properly structured object would handle the total like this:

Money total = ...;
Money amount = ...;
total.increaseBy( amount );

OK nice – here is an example that does not alter its argument. However, it raises the question: how does increaseBy() operate without a getter for the amount of money in amount, assuming all member variables are private?
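One partial answer: in Java (as in C# and C++), private is class-scoped rather than instance-scoped, so a Money method may read another Money instance's private fields directly, with no public getter. A minimal sketch (the cents-plus-currency representation is my assumption, not from the article). This only works when both objects are literally the same class, though; across different implementations the question stands:

```java
class Money {
    private long cents;            // assume a minor-units representation
    private final String currency;

    Money(long cents, String currency) {
        this.cents = cents;
        this.currency = currency;
    }

    // No public getter needed: "private" is class-scoped in Java, so one
    // Money instance can read another Money instance's fields.
    void increaseBy(Money amount) {
        if (!this.currency.equals(amount.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        this.cents += amount.cents;
    }

    boolean sameAmountAs(long cents) { return this.cents == cents; }
}
```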

Maybe we can get away with this by defining a MoneyInterface that does not use getters and setters, only intention-revealing methods. Then, implement the MoneyInterface with the necessary getters and setters. Then in increaseBy(), attempt a downcast in order to get at those methods and perform the increase.

OK, I know, the huge downside here is the downcast. There is also the abuse of the interface construct.
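For concreteness, here is roughly what that downcast hack looks like (all names are mine, purely illustrative). The instanceof check and cast are exactly the wart being confessed to, and the interface exists only to hide a getter that is still there on the concrete class:

```java
// Intention-revealing interface: no getters or setters exposed.
interface MoneyInterface {
    void increaseBy(MoneyInterface amount);
}

class DollarMoney implements MoneyInterface {
    private long cents;
    DollarMoney(long cents) { this.cents = cents; }

    // The "hidden" accessor lives on the concrete class only.
    long getCents() { return cents; }

    @Override
    public void increaseBy(MoneyInterface amount) {
        // The downcast: the arithmetic only works if the argument happens
        // to be our own concrete type.
        if (amount instanceof DollarMoney) {
            this.cents += ((DollarMoney) amount).getCents();
        } else {
            throw new IllegalArgumentException("unknown Money implementation");
        }
    }
}
```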

Following the spirit of the Feathers post, shouldn’t we ask the amount object, the one on which we used a getter, to perform the action, like this:

double orderTotal = ...;
amount.increaseBy(orderTotal);

Granted, this version does not take a Money object, but it side-steps both the getters-are-evil complaint and my objection above.

Another question I have for this crowd: is this getter-is-evil dance even valid at layer boundaries? I think back to The Project and I had code like this:

Public Function storeNewPC(ByRef pcToStore As PC) As Integer

	Dim qryStr As String = ""

	qryStr = qryStr & "sp_STORE_PC '" & pcToStore.getMake().getID() & "', '"
	qryStr = qryStr & pcToStore.getModel().getID() & "', '"
	qryStr = qryStr & pcToStore.getCores() & "', '"
	...

	Return db.exec(qryStr)

End Function

Following this idiom, I shouldn’t be asking the pcToStore for all of its guts; I should instead tell it to generate the query string I need:

Public Function storeNewPC(ByRef pcToStore As PC) As Integer

	Dim qryStr As String = pcToStore.generateStoreStr()

	Return db.exec(qryStr)

End Function

Please excuse me as I vomit. Big no-no.

Possible side-steps:

  • use reflection. However, as I alluded to in the Nilsson book review, this sort of trickery is both a little too complex and kind of weak; if all you’re doing is using it to access private members, then you are already violating the spirit of the getter-is-evil argument.
  • use the Friend keyword and keep repositories and domain objects in the same assembly. This works, but it carries with it the same “spirit” problem as above, and it is a .NET-specific approach.
  • eliminate getters and provide methods that return an agreed-to data structure whenever info like this is needed…an IList, an array, a struct…whatever. This way, you’re not tied to internal data types. On the other hand, this approach is somewhat clunky.
  • most industrial solutions will be using a third-party tool for this stuff anyway, so…use a third-party tool like NHibernate (which, by the way, uses reflection).
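The third option above might look like this (a hypothetical Java rendition of The Project's PC object; the Map-of-strings snapshot is just one choice of "agreed-to data structure"). The repository builds its query from the snapshot, and the PC's internal field types never leak:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class PC {
    private final String makeId;
    private final String modelId;
    private final int cores;

    PC(String makeId, String modelId, int cores) {
        this.makeId = makeId;
        this.modelId = modelId;
        this.cores = cores;
    }

    // Instead of one getter per field, hand the repository a snapshot in an
    // agreed-to shape. Internal representations (here, cores as an int)
    // stay hidden behind the conversion.
    Map<String, String> storageSnapshot() {
        Map<String, String> snapshot = new LinkedHashMap<>();
        snapshot.put("make", makeId);
        snapshot.put("model", modelId);
        snapshot.put("cores", Integer.toString(cores));
        return snapshot;
    }
}
```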

It could very well be that layer boundaries are an exception. Another JavaWorld article names two exceptions to the rule: procedural boundaries (like my layer boundary; after all, the examples in these articles stay within the business-logic, or domain, layer) and getters that return interfaces.

In the case of the latter, I have to wonder if I really need all of that bloat if I simply want to return an integer from an object. Do I really need to define an interface for some sort of holder, implement the interface, and then return the implementation? Even so, if I do that, won’t that interface have its own getter method to return the integer?
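Spelled out, that getter-returning-an-interface exception looks something like this (all names hypothetical), and sure enough the interface just pushes the getter down one level:

```java
// The "holder" interface the exception calls for...
interface IntHolder {
    int getValue();   // ...which is itself just a getter, one level down.
}

class CoreCount implements IntHolder {
    private final int cores;
    CoreCount(int cores) { this.cores = cores; }
    @Override public int getValue() { return cores; }
}

class PcSpec {
    private final CoreCount cores = new CoreCount(8);
    // A "getter that returns an interface": callers see IntHolder, not
    // CoreCount, so the concrete type can change without breaking them.
    IntHolder getCores() { return cores; }
}
```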

What’s even worse, all throughout the Nilsson book, .NET properties were used. And I’m now thinking of doing a re-make of The Project in C#, and I know I’ll come across this issue — now that I know it is an issue to start with. And I don’t know, but I think I had another thing to say about the statement/account example that I came up with when I was tossing in my sleep at 2 AM last night.

This is making my head hurt. You tell me. Am I thinking too hard?
