Thursday, 31 July 2008

Email Best Practices

Email is very commonly used in the workplace for a variety of reasons, but in some cases it becomes totally unmanageable, with people holding several thousand emails across their various folders, most of which are never read or certainly never digested. Important information gets missed and the proper process flow of the business suffers. Hopefully your company spends 90% of its time doing what it does in a normal way and only 10% dealing with the problems that naturally arise when things break, aren't delivered, or are affected by individual lack of performance. Of course, in your company it might be more than 10%, but if that figure is too great then your company is simply wasting money, and email handled without best practice can be a major player in this. What follows are some straightforward tips on email.

  1. Give everyone in your company email training. In my experience, more people than you might think don't know how to do the basics like bcc and expiry dates, let alone the more advanced features that will make email their slave rather than their master.

  2. Ask whether sending an email is the best way to communicate. I have been part of many email discussions that took much more time to type than if I had simply picked up the phone and spoken to someone. You can still record the fact that you had the conversation, if you need to, in a CRM tool or even a Word document.

  3. If you do need to send it, ask whether all the recipients need to read it - avoid overusing mailing groups when perhaps only a subset of the people need the email. Adding people as recipients when the email is for their information only is fine, unless they are senior managers or people who already receive a lot of mail, in which case you should really ask whether they need to see the information at all. Once they are cc'd they might receive lots of replies which they don't need to see.

  4. If you are receiving lots of emails that are not relevant, do not be afraid to ask the sender to stop sending them to you. Tell them you are trying to reduce your inbox.

  5. Don't be lazy with subject lines. Carefully thought-out subjects mean people can see exactly what you want to talk about and can choose to ignore something that is low priority until they want to look at it. A subject like "Question" should not be used, whereas something like "How do I return an item to stores?" lets the recipient know it is something to be answered straight away. This is especially true when you are adding cc recipients to the email.

  6. If you are sending out mass emails externally to your company, put everyone's email address in the bcc field so that each recipient cannot see all the other addresses. You have a duty (sometimes a legal one) to protect email addresses from prying eyes.

  7. Do not leave emails in your inbox. People can easily miss new emails because they fall amongst the others. Read each email and then either action it and delete it, or file it - perhaps in a todo folder. This way you will not miss emails, and they are easier to delete when no longer required because everything gets moved or deleted.

  8. If you need clarification on an email, ask yourself whether a phone call would be more efficient; then you can delete the email and know exactly what you need to do.

  9. Make good use of tasks to track what you need to do, rather than keeping emails around for the same purpose. It keeps things neater, and you can easily copy and paste text from the email into the task description. You get the extra benefit of priorities and scheduling.

  10. If you have a problem with spam and junk mail, use junk filters or change your mailbox name every once in a while; people who need to contact you will be able to get your new email address easily enough if they really need it.

  11. Make use of inbox rules to automatically move regular emails into a folder where you can then choose to read them, keep them or delete them.

  12. Remember that a lot of confidential company information is kept in emails, so make sure you lock your PC when you leave it, and regularly delete unwanted emails and sent items (perhaps after one year), which will reduce the potential impact of somebody reading your email.

  13. Have a company email policy and treat the subject seriously. Being informal is fine in theory, but we are talking about wasted time in your company, and it should be taken seriously. You then have specific comeback on somebody who continues to pollute people's inboxes with junk mail or who generally increases other people's workload by not following best practices.

Monday, 21 July 2008

Software Development Coming of Age

There was a time when computer science was the preserve of academics and big business for the simple reason that computers were expensive and their per-hour cost was high. You wouldn't have an A-Level student being let loose on a system for hours trying to hack out a project for school.
Times have changed, however, and computers are now extremely cheap. Even in countries where the average income is low, many people have at least limited access to a PC which can be used for, amongst other things, programming.
Like most things, this is both a blessing and a curse. It is a blessing because people who might be very skilled programmers have access to something they wouldn't have had 20 years ago, and these people are part of the skill pool that businesses use to produce productive software (we hope). The curse is that the skill pool is polluted with thousands of people of very limited ability, and although that is not necessarily bad in itself, a percentage of these people seem very free with their advice to others who are struggling with something, proposing solutions that might be, well, rubbish. There is no easy way to work out the value of this advice because programming is often treated like mathematics, where if the solution works, it must be correct. A better analogy would be car mechanics, where just because an engine fits and turns over doesn't mean it is the best type of engine or the best way to connect it up - although it might well work. Programming is often a set of balances where speed is offset against readability, or where pure theory can be the enemy of pragmatism and of getting something 'good enough' rather than 'perfect' in a reasonable timescale. The skill of the programmer is not whether they always produce the fastest code but whether what they produce is appropriate to the given requirement.
That is one problem, but it can get worse: people write rubbish sometimes because of poor advice or lack of training, but people also often re-invent the wheel. How many people must have written a 3-tier database web application which is 80% the same as every other one in the world? Why can't we share what we have done and move more quickly into the future? Well, we sort of share, but then we face problems similar to those above: we get given example code by somebody when it might be rubbish to varying degrees, or we take some existing code and, by not understanding it, either modify it and make it rubbish when it was OK before we changed it, or apply it to a system where it is not appropriate. For example, a non-secure database application might be fine for a corporate network where hacking is seen as unlikely, but it would be inappropriate for a public network where hacking is commonplace. This is compounded by the seemingly high number of people on forums who appear to have little or no programming knowledge asking things like "how do I generate 3D graphics" or "how can I write a flight simulator" - can we trust these people to write robust software?
So what do we do? I read a book not too long ago called "Emergent Design: The Evolutionary Nature of Professional Software Development" by Scott Bain, and he talked about more regulation for the profession we call Software Engineering. A person cannot merely decide that he wants to be a doctor or lawyer and start practising. Even if he is poor or seemingly able, he must attend various courses and take exams to prove his competence. Even for mundane things like driving a car, people have to be a certain age and have to pass a driving test. Why? Because these things carry responsibility. Driving or being a doctor without proven skills is dangerous. Being a lawyer without skill can cause somebody to be prosecuted without good reason, or cause somebody who is guilty to be released into society when they should be locked up. What about software development? Poor software is often blamed by companies for various corporate problems, and who is in a position to deny it? We have all experienced poorly written software, so we almost expect things to be less than perfect. These bugs cost us time and money as well as frustration. Although the year 2000 'bug' was not really a bug in one sense, it cost companies millions in proactive and reactive costs around the millennium eve in case their systems crashed. While we have a totally unregulated industry, we are all in danger. So imagine we had a regulated system where somebody has to reach a certain level of qualification before they can call themselves a "developer" or "software engineer". This would help the general quality of the systems being developed - or at least improve it over time, since currently many people who teach computer science won't necessarily have a qualification themselves, or else they learnt their trade a generation ago when priorities were different.
To solve the second problem, i.e. people re-inventing the wheel, I think that if the industry became regulated then the industry body could maintain a single 'red book' describing all of the best practices in software, where they exist, with any caveats to the design that might be appropriate. It would not be a copy-and-paste resource, because we don't want people to copy and paste from one context into another - that causes bugs. What we do want is a single defining place that says, "if you are designing a new database, you must consider 1) security of database access (link to sub-page), 2) layout of tables and links (link to sub-page)" etc. A sub-page might say, "you must implement a security model for stored procedures if you a) have a publicly visible server, or b) have an application that behaves differently for different users... but if you secure the procedures you will a) incur additional development time, and b) need to produce a comprehensive test case to ensure you have secured them (or create a process that means they will definitely be secured as they are created)... etc."
Hopefully you get the idea. It can never be totally definitive for a specific scenario, because the context of software always differs, but at least if there is a one-stop shop for information, people will not forget something and will be able to see the pros and cons of every decision before they make it. Of course, good practice might change with time, so the system would need a way of updating users so that they know this has happened, but what we end up with is a way of sharing knowledge from bona fide engineers who know what they are talking about, in a way that does not encourage copy-and-paste with all of its pitfalls.
I suspect such a system exists in various companies, and probably a lot of the content is the same, but rather than trawling Google to find something of dubious value, if this content were all in one place with known reliability then we could all move onwards and upwards.

Thursday, 17 July 2008

Writing good logic in code

This article does not relate to a specific language but to many, although I am only familiar with about 10, so you will have to decide whether the comments are appropriate for yours.

There are two major points to cover, and the reason for needing good logic is simple: I'm not sure there are many statistics, but I bet the majority of software bugs are related to broken or incomplete logic. The first point is that we need to reduce the amount of logic required in the first place; the second is that we need to simplify and rework any logic that is required in order to make it readable, maintainable and testable.

How can we reduce logic? The first and obvious point is that we need to learn it properly. I saw some code the other day that said something like:

if ( myString.Length == 2 && myString.Substring(0, 2).ToUpper() == "SC" )...


and it made me chuckle, but it is typical of the first point. Can you see what is wrong with it? If you are checking for the string "SC" appearing in the first two characters of a string and also checking that its length is equal to 2, then surely the string simply EQUALS "SC"; in other words, the following would be equivalent:

if ( myString.ToUpper() == "SC" )...


Which of the two is clearer? The second is by far the clearer and has far fewer potential defects. The first example could suffer an accidental = instead of == (in languages that allow it), it could use different string references for the first and second checks, and it could get the substring values wrong - all on top of the basic potential defects found in both examples. Now imagine counting the potential defects in a piece of software before and after rationalising poor logic like this; you might see a 50% reduction in defect risk - nice! There are plenty of places where logic gets confusing: how many people have written horrific "if, then, else if, if, then, end if" chains as if that were perfectly acceptable? Ask your bosses to enforce a simple logic policy: either something is simple or it gets re-factored.
The second way to reduce logic is to use polymorphism and inheritance to create structural logic. You need to understand the difference between structural logic and behavioural logic, because one should always be implemented with polymorphism and the other merely might use it. A car does not need to control whether the drive-shafts turn the wheels, because for a given car they always will. That is not to say that all drive-shafts turn wheels, but for a given car, once it is built, that is how it works. On the other hand, the gear to use is not fixed but depends on how the driver is driving: it requires behavioural control and can change frequently, whereas the drive-shafts are fixed and their behaviour depends on the structure. How do we equate this to software? Suppose we have a simple application with two dialogs, one for normal users and one for administrators. They have largely the same properties, and perhaps one has an additional option. When we want to query the properties set by the user in the dialog, we could say:

if ( Dialog.Name == "Admin" )
{
}
else
{
}


or we could think we are being slightly cleverer by using pointers/references:

if ( AdminDialog != null )
{
}
else
{
}


Sure, it's not the end of the world, but the first example has a built-in assumption that the dialog has a name that is never going to change, and both examples assume there are only two dialogs. If you add another type, the code in both examples will still compile. This is a structural situation, since once the dialog type is defined it presumably stays set until the application is 'rebuilt' or restarted for another user. In this case, we could create a base class or interface for our dialogs, keep a pointer to the base class and lose the logic:

MyDialog.DoSomething(); // Will call whatever is currently attached


Ah, you say, but I do things differently for each dialog and need the logic branches to separate them. Most of the time you do not. At the level you are handling this, you probably do not need to know about the detail, but if you do, ask the dialog to do it, or ask the dialog for a handler class which can be specialised for each type, so that the caller still does not need to know what type of dialog is displayed. If you create a new type, you will need to implement the base class or interface, so once the compiler is happy you will have precisely 0 structural defects in your code!
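To make that concrete, here is a minimal sketch of the structural approach (all the class names here are invented for illustration):

public abstract class BaseDialog
{
    // Each dialog supplies its own behaviour; callers never branch on the type.
    public abstract void DoSomething();
}

public class UserDialog : BaseDialog
{
    public override void DoSomething() { System.Console.WriteLine("Normal user view"); }
}

public class AdminDialog : BaseDialog
{
    public override void DoSomething() { System.Console.WriteLine("Administrator view"); }
}

public static class Program
{
    public static void Main()
    {
        bool isAdmin = true; // decided once, when the 'structure' is built
        BaseDialog dialog = isAdmin ? (BaseDialog)new AdminDialog() : new UserDialog();
        dialog.DoSomething(); // calls whatever is currently attached - no if/else
    }
}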
Behavioural logic relates to things that change all the time. You might receive a network message and decide what to do based on the message number:

if ( Message.Number == 1 )
{
// Handle message 1
}
else...


Remember that although this logic might be required, you can still use design patterns to handle things in a way that does not create large and convoluted logic that is hard to maintain. You could argue that for simple cases the logic is OK, but my experience is that there is no simple case - what if you accidentally mistype the value you are checking, or put a value in twice? Do you really test every single message to make sure it works?
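One such pattern is a simple handler table, sketched below (the Message class and all the names are invented); each message number is registered exactly once, in one place, so a mistyped or duplicated value is far easier to spot than in a long if/else chain:

using System;
using System.Collections.Generic;

public class Message
{
    public int Number;
}

public class MessageDispatcher
{
    // Each message number maps to exactly one handler.
    private readonly Dictionary<int, Action<Message>> handlers =
        new Dictionary<int, Action<Message>>();

    public void Register(int number, Action<Message> handler)
    {
        handlers.Add(number, handler); // throws if a number is registered twice
    }

    public void Dispatch(Message message)
    {
        Action<Message> handler;
        if (handlers.TryGetValue(message.Number, out handler))
            handler(message);
        else
            Console.WriteLine("Unhandled message: " + message.Number);
    }
}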

The second important area is simplifying whatever logic remains in order to make it usable. This follows on from the behavioural discussion, since the structural logic should already be hidden away in polymorphic function calls.

How do we simplify logic? Again, we need to learn how to re-factor it, and one useful technique is logic reversal; so instead of:

if ( VariousConditions )
{
    Control1.Enable = true;
}
else if ( SomethingElse )
{
    Control1.Enable = true;
    Control2.Enable = true;
}
else if ( AnotherThing )
{
    // You get the idea
}


which is very common in software, you can reverse the logic and let the individual items dictate what their state needs to be:

Control1.Enable = VariousConditions || SomethingElse;
Control2.Enable = SomethingElse || AnotherThing;


Can you see how much neater that is? Consider this as a possible refactoring whenever you see sprawling conditional blocks like the one above.

Another technique is very simple but often underused: create a function with a helpful name and move the logic into it - or, more commonly, into several functions. I can only think it is laziness that stops us creating these helper functions, which can turn otherwise impossibly complex logic statements into a collection of helpful and very readable ones:

if ( MessageIsTaggedAsUrgent(Message) || MessageIsFirstInQueue(Message) )
{
    DealWithIt();
}


The logic behind each of the two unrelated conditions will make more sense in separate functions than lumped together in a single statement. Also, if one type of message is not being handled, there is a single, easily testable function to examine.

The last suggestion, which helps massively after everything else is done, is to use automated testing tools (many great free ones exist, such as csUnit - although I'm not sure why we don't like paying for things!). I am constantly amazed at how often I am caught out by a very innocuous defect in an otherwise simple function. A function to add items to a list? How hard could that be? But let us consider what might fail even in that single function: 1) the list could be null/uninitialised, 2) the item might already exist in the list, which might not be allowed, 3) the list could be full, 4) the item might not be the right type for the list (not always easy to trap with the compiler for un-typed collections), 5) you might be trying to insert it at an invalid position, 6) the object you are trying to add might be null and this might not be allowed. You get the idea - there are often more considerations than we can think of, so how do we cope? We don't - not alone, anyway. We have peer code reviews, we get trained, we gain experience, we use robust languages and frameworks, and we test, test, test. How can we test the function? We can throw a whole load of automatic data at it and find out what happens under realistic conditions, or we can specifically handle the exceptions or errors that might be thrown by an invalid condition (or both). We don't necessarily care about running out of memory after adding 4 gazillion items to a list, but it might be an issue. We might instead care about what happens with a full list or a null object, and we might want to test the logic by setting external conditions and calling the function. If you can't easily test the function, break it down until you can. Remember that our functions often assume too much about the parameter data or member variables they use; these assumptions, coupled with poor logic design, are defect central!
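As a sketch of what that looks like in practice, here are a couple of NUnit-style tests against a hypothetical AddItem helper (the helper and its rules are invented for illustration; csUnit syntax is very similar):

using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ItemListTests
{
    // A hypothetical helper wrapping the 'add an item to a list' logic.
    private static void AddItem(List<string> list, string item)
    {
        if (list == null) throw new ArgumentNullException("list");
        if (item == null) throw new ArgumentNullException("item");
        list.Add(item);
    }

    [Test]
    public void AddingAValidItemGrowsTheList()
    {
        List<string> list = new List<string>();
        AddItem(list, "widget");
        Assert.AreEqual(1, list.Count);
    }

    [Test, ExpectedException(typeof(ArgumentNullException))]
    public void AddingANullItemIsRejected()
    {
        AddItem(new List<string>(), null); // condition 6 from the list above
    }

    [Test, ExpectedException(typeof(ArgumentNullException))]
    public void AddingToANullListIsRejected()
    {
        AddItem(null, "widget"); // condition 1 from the list above
    }
}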

Unfortunately, most logic issues are not found until after release, because there are too many of them to test and many are very subtle or specific. By carefully approaching the design and build process, you will see a massive reduction in these!

Monday, 14 July 2008

Reporting Services, Margins, Page Layout etc

If you use Reporting Services, I bet you have spent a while trying to get reports to print out as expected! I'm not sure why something that wastes so much of everybody's time (printing things out properly) is not well understood and fixed in software by now - it should be impossible to get it wrong. The printer knows its paper size, the software knows the paper size, yet it prints most of your document on one sheet and then a little strip on the next - obviously what you wanted!
Anyway, in reporting services there are some quirks that you need to know about in order to get your report correct.
1) Select Report -> Report Properties and the Layout tab. There are paper sizes and margins in here. Note that the page sizes here need to match the physical paper size. I'm not sure what other effect they have, because the designer doesn't draw them on your report layout!
2) You might think that is it, BUT you then need to right-click on the grid area of the report in the layout view and choose Properties (the properties of the body of the report), and lo and behold there is another field here called "Size", which consists of width,height in measurement units (mm, inches etc) and defines the area of the report body. For some reason this is not restricted by the page size in the report properties, and you won't know if you make it too big! This needs to equal the page size minus the margins if you want the report to fit on one sheet.

Example: A4 paper is 210mm by 297mm, so you set the report layout (in Report -> Report Properties) to these figures; then suppose you set the margins to 10mm all the way round. The report body should be set to 210 - 20 = 190mm wide and 297 - 20 = 277mm high in portrait, or 297 - 20 = 277mm wide by 210 - 20 = 190mm high in landscape mode. It sounds really simple, but it still takes time to find these things out!
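For what it's worth, these settings are just elements in the underlying .rdl file, so you can sanity-check the arithmetic there too. It looks something like the following (element names quoted from memory, so treat this as a sketch of the 2005-era RDL rather than gospel):

<Report xmlns="http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition">
    <PageWidth>210mm</PageWidth>
    <PageHeight>297mm</PageHeight>
    <LeftMargin>10mm</LeftMargin>
    <RightMargin>10mm</RightMargin>
    <TopMargin>10mm</TopMargin>
    <BottomMargin>10mm</BottomMargin>
    <Width>190mm</Width> <!-- body width = 210 - (10 + 10) -->
    <Body>
        <Height>277mm</Height> <!-- body height = 297 - (10 + 10) -->
    </Body>
</Report>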

Wednesday, 9 July 2008

Why Windows Vista is pointless

Vista is a strange beast: touted as the next big thing by Microsoft (MS) yet attracting much scorn from various people in the IT and business world. My own take is quite simple. Vista is an operating system, meaning it provides a 'desk' on which to run various applications or applets. Windows XP is also an operating system and also ran most things I need, so why would I upgrade?
1) If it were free then I might upgrade just to get something that looks more modern, but it is not free; it is pretty damn expensive in fact.
2) OK, so it costs money. I therefore have to weigh up the cost/benefit ratio, and this should help me decide whether the big dollars are worth it. As far as I can see, for most users - particularly business users, who prefer not to use bells and whistles - there is precious little value added. One of the biggest selling points was a snazzier interface, but this only came with the ridiculously expensive and humorously named "Ultimate" edition, so most people didn't benefit from it. Why would people pay big bucks for a snazzier interface if that is all it is?
3) It is supposed to be more secure from hacking etc, but we were convinced by MS that XP was secure, so has something happened since Vista was released that has made XP less secure? If Vista were basically unhackable whereas XP was certainly vulnerable in some areas (and perhaps there were many latent security problems waiting to be discovered), then this might be a reason to upgrade; but it isn't, and we still get patches for Vista, so obviously the fundamental security model is still lacking. Still no reason to upgrade.
4) People have complained about the lack of driver support, but this will always be the case with new operating systems, and to be honest I think most people would bear with MS while the drivers were developed if this were the only issue with Vista. What MS seem to have forgotten is that most people in the world still use XP, so hardware manufacturers are in no rush to write Vista drivers!
5) OK, let us assume it was not so expensive for not much; would we then automatically upgrade our current (probably XP) OS to Vista? Not on your life, as far as I am concerned. In supposedly going at least partly back to the drawing board to code Vista, we could reasonably have expected a load of bloat and slowness to be cut out - after all, it does not need to support 16-bit Windows applications (does it?) and there must be other stuff which is basically redundant. They could have simplified lots of the Windows API and generally made it AT LEAST as fast as XP by the time they added in some new bits, but no sir-ee-bob, it is slower - noticeably slower except on the latest machines with 4 processor cores, which can pretty much handle it. The problem is that all the hidden power being used to run the OPERATING SYSTEM is not available to run what actually needs to be running, i.e. the APPLICATIONS. I'd rather run XP on an older machine and still have loads of power to spare for my apps.
6) All of this performance hit would be forgivable if achieving ultimate security (which Vista is presumably striving for) always cost loads of processor cycles and memory overhead, but interestingly the latest incarnations of Linux are more secure than Vista and run much faster too. Why? Because they have a good security model that does not require massive OS overhead to manage.
7) I've always wondered why Windows XP and Vista make generous random use of the hard disk when I have not noticed it once on Linux. Linux installs and removals are quick and painless; Windows ones can take hours! Despite Linux potentially being very flaky with all those grubby programmers having fingers in pies, it is, I am sad to say, scoring higher on my usability list than Windows! The one thing still lacking in Linux, though getting better all the time, is the number and quality of applications available. Office, internet and email are fine, and these are 95% of what I use anyway. The development environments are not quite up to Visual Studio quality, but they are very usable despite this. There are even cool programs on Linux that you can't get (at least for free) on Windows, including XTrkCad, a model railway CAD program.
8) Sorry MS, you have well and truly missed the point with Vista, and you probably know it, so stop telling us we desperately need it and go back to the drawing board!

Monday, 7 July 2008

The definition of the object is hidden

Strange compiler error today in a VS2005 C# ASP.NET web site. I had renamed a couple of old files, copied some back into the web directory from elsewhere and hit "Build". I got loads of errors saying that fields in the C# file I had copied back into the solution were not defined, even though they clearly were in the aspx file. I checked all the usual things - spellings, @Page names etc - and still nothing. When I right-clicked the field names and selected "Go To Definition", it went into the aspx page and then gave the suitably abstract error "The definition of the object is hidden", which is fine except it was very confusing (apparently it is talking about the actual code rather than the server control on the page).
Anyway, to cut a long story long, it was because the build tool digs everything out of the web directories, including my old definitions of the pages I was building. It had linked the names to the old class definitions and then complained that my new definitions didn't work. All I had to do was remove the backup files from the directory (or presumably I could have renamed them to a different extension) and it was all fine.
How flipping annoying.

Friday, 4 July 2008

Request for permission SqlClientPermission failed

I am developing a XAML Windows Presentation Foundation (WPF) app designed to be browser-only (an XBAP), and when trying to debug it using a shared database library I got the above exception when it called SqlConnection.Open, even though it had an identical connection string to another, working part of the system. I was concerned that it was related to user contexts, sessions, authentication and the other nasties that hide under the covers of a web app, but fortunately it was easier than that.
My XBAP application was marked as partial trust, which means it is more likely to load without hassle in a stranger's browser if, for example, you post the XBAP on your company web site. This trust level, however, means amongst other things that you cannot open a connection to a database (fair enough, I guess - you could be trying to hack someone's PC). If you instead mark it as Full Trust (under project properties -> Security), it will only be loaded from a trusted location, i.e. usually the local intranet, unless a user specifically allows it or turns down the security in their browser; but it does mean you can do more useful things like opening connections to databases.
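For reference, that setting just rewrites the PermissionSet request in the project's app.manifest; the full-trust version looks roughly like this (quoted from memory, so treat it as a sketch):

<trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
        <applicationRequestMinimum>
            <!-- Unrestricted="true" is the full-trust request; partial
                 trust lists individual permissions here instead. -->
            <PermissionSet class="System.Security.PermissionSet"
                           version="1"
                           ID="Custom"
                           SameSite="site"
                           Unrestricted="true" />
            <defaultAssemblyRequest permissionSetReference="Custom" />
        </applicationRequestMinimum>
    </security>
</trustInfo>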
I switched it to full trust since this is for a corporate network and it worked fine (well in fact it came up with an unrelated error but that is another story!).

Thursday, 3 July 2008

VisualTreeHelper.HitTest

VisualTreeHelper.HitTest is a cunning function available in the Windows Presentation Foundation (WPF) classes that provides mouse hit testing for a given panel and point. It returns a result if the given point is inside a control in the given panel (grid, stack panel etc).
The basic form takes a panel and a point and returns a result which, if not null, contains a VisualHit object that can be cast to whatever control was hit.
The reason I am telling you what you could find out on MSDN is that when you pass a point and a panel into the function, it assumes the point is relative to the child coordinates of the panel; it will NOT take into account the fact that the panel is not necessarily located at 0,0 on its own parent panel. To ensure the point is passed in child coordinates, inside your mouse handler use the function:
MouseEventArgs.GetPosition()
and pass a reference to the panel on which you will be hit-testing, rather than null or the parent panel, which would give you the wrong child coordinates. For example:
<StackPanel MaxWidth="5000">
    <Grid Name="JobsGrid" Canvas.Bottom="0" MaxWidth="5000">
    </Grid>
    <Grid Name="PeopleGrid" Canvas.Top="0" MaxWidth="5000">
    </Grid>
</StackPanel>

protected void CanvasMouseMove(object sender, MouseEventArgs e)
{
    HitTestResult Result = VisualTreeHelper.HitTest(JobsGrid, e.GetPosition(JobsGrid));
    // etc
}
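Once you have the result, remember that it can be null and that VisualHit is a DependencyObject you cast yourself. As a minimal sketch, the "// etc" above might become something like this (assuming the grid contains Rectangle elements - that part is invented, and you'll need using System.Windows.Shapes for it):

if ( Result != null )
{
    // VisualHit holds whatever visual was under the point; cast it to
    // the type you expect (a Rectangle is assumed here for illustration).
    Rectangle bar = Result.VisualHit as Rectangle;
    if ( bar != null )
    {
        bar.Opacity = 0.5; // e.g. highlight the element under the mouse
    }
}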

Wednesday, 2 July 2008

static or non-static?

Are static functions good or bad? Should data ever be static? Firstly, we need to say that we are talking about OO design here, not procedural code. Static in C++ sometimes meant that something had file visibility, and sometimes that a single function existed which required no instance of its parent class to call it.
So, good or bad? We should be pragmatic about this. It is rarely correct to say that there is "never any reason to use static", or vice-versa, so let us state what we know about the pros and cons of static methods:
Pros:
1) Quick and dirty (bad reason)
2) Allows a consumer class to obtain a reference to something without knowing its concrete type (e.g. an encapsulated constructor), meaning that the consumer calls something like
IMyInterface inf = HelperClass.CreateObject();
rather than
IMyInterface inf = new ConcreteClass();

3) In certain scenarios, such as database helper functions, it might seem neater and clearer to have
ClassName.StaticFunction(whatever);
rather than
new ClassName().NonStaticFunction(whatever);
or
ClassName cn = new ClassName();
cn.NonStaticFunction(whatever);

Cons:
1) No polymorphism of function, i.e. a subclass cannot override the static method, it can only hide it.
2) If it is a non-constructor then it ties the consumer to the type of class providing the function. If you want to change the provider, you have to modify the consumer.
3) Static functions are not implicitly thread safe, because any state they keep must itself be static and is therefore shared by all threads. (Local variables are still created per-thread on the stack in both cases; it is shared static data that causes the trouble, whereas instance data can be isolated to one object.)
4) Static functions can gloss over a badly levelled system. If you have a class called
Employee
that has a static function
Employee[] GetAllEmployees();
then, levelled correctly, you should have another class called, say, Company, and this new class would have a non-static method GetAllEmployees() returning an array of type Employee. The levelling has removed the need for the static function (see the sketch below).
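As a sketch (the Company internals are invented for illustration):

using System.Collections.Generic;

public class Employee
{
    public string Name;
}

public class Company
{
    private readonly List<Employee> employees = new List<Employee>();

    public void Hire(Employee employee)
    {
        employees.Add(employee);
    }

    // An instance method at the right level: a company owns its employees,
    // so no static function on Employee is needed.
    public Employee[] GetAllEmployees()
    {
        return employees.ToArray();
    }
}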

My own opinion is to start with instance functions and only use static ones where the instance ones do not suit the situation for some reason. Up until this point, I had only ever used static functions to match existing code or to call static functions in libraries that I did not write, although I have recently been convinced that the encapsulated constructor using a static method is a good idea (not to be confused with the Singleton pattern, which is similar but only permits a single instance).
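For completeness, here is the encapsulated-constructor shape in full (using the placeholder names from the pros list above); note that, unlike a Singleton, it returns a new instance on every call:

public interface IMyInterface
{
    void DoWork();
}

// The concrete type can be internal; consumers never name it directly.
internal class ConcreteClass : IMyInterface
{
    public void DoWork() { /* real work here */ }
}

public static class HelperClass
{
    // Swapping in a different implementation later means changing this
    // one method rather than every consumer.
    public static IMyInterface CreateObject()
    {
        return new ConcreteClass();
    }
}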