Tuesday, 27 March 2012

Error 1603 in WiX installer

This is a common error and can mean lots of different things if you Google it. In my case, I looked into the WiX install log and found the following:

WriteIIS7ConfigChanges:  Error 0x800700b7: Failed to add appPool element
WriteIIS7ConfigChanges:  Error 0x800700b7: Failed to configure IIS appPool.
WriteIIS7ConfigChanges:  Error 0x800700b7: WriteIIS7ConfigChanges Failed.
This was a bit strange, since I hadn't changed anything in the particular service being installed, and the deployment was supposed to be destructive: everything previously installed was removed before about 50 programs were installed.
Well, in my case, the build wasn't completely destructive! What had happened was that I had manually installed a couple of related programs and given their App Pool the same name the installer was trying to use. Because the pool already existed, the installer didn't simply join the existing one; it returned an error (0x800700b7 is the HRESULT wrapping of ERROR_ALREADY_EXISTS).
To fix, I simply created a new app pool for the existing services and moved them over. I then deleted the old app pool and ran the installer again.
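For reference, the pool would be declared in the .wxs source with something like the fragment below. This is a hypothetical example using the WiX 3 IIS extension (the Id, Name and runtime version here are made up, not the ones from my installer):

```xml
<!-- Assumes xmlns:iis="http://schemas.microsoft.com/wix/IisExtension"
     on the Wix element. If a pool with this Name already exists outside
     the installer's control, WriteIIS7ConfigChanges can fail with
     0x800700b7 (ERROR_ALREADY_EXISTS). -->
<Component Id="AppPoolComponent" Guid="*">
  <iis:WebAppPool Id="MyAppPool"
                  Name="MyServiceAppPool"
                  ManagedRuntimeVersion="v4.0" />
</Component>
```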
Lovely.

Tuesday, 20 March 2012

Invalid object name 'master.dbo.spt_values'

You might have found this unusual error when trying to right-click a database in SQL Server Management Studio (to get properties etc.). It is caused by tables having been deleted from the master database, which can happen through various errant processes!
Anyway, to fix it, run u_tables.sql from the Install directory of your SQL Server instance; this will recreate most of the tables you need and get you back up and running.
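Something like this from a command prompt will do it. The path is an assumption here (it varies by SQL Server version and instance name), so check your own install directory first:

```
REM Assumes a default SQL Server 2008 R2 instance and Windows authentication
sqlcmd -S localhost -E -d master -i "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Install\u_tables.sql"
```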

Thursday, 15 March 2012

Detail and Caution

Many times in my life, I have been guilty of not taking enough time to ensure I understand what someone is communicating, and end up misunderstanding them. We could say this is life, but in the commercial world some mistakes are simply not worth making because they can cost time, money, reputation and legal exposure.
One such email from someone at work asked what would happen if a certain program failed because of a permission problem. My assumption was that since this program wrote to disk, he was talking about a file permission, and I responded accordingly, pointing out a potential glaring omission in functionality. Fortunately, someone else who read the email was aware of a problem which actually related to a permission issue when calling a service and which, when I looked into it, was much more mundane and didn't cause any problems other than the program crashing.
I guess "measure twice/cut once" should apply to software as much as the building trades!

Tuesday, 13 March 2012

Oh Dear More Hacks

Another article today at http://www.theregister.co.uk/2012/03/12/smut_site_hacked/ which talks about a site that was hacked. It is understandable that web technologies have security weaknesses, and no-one would blame somebody for having their DNS poisoned or some rootkit delivered in a web request, but this is another case of the basics being so wrong.
We are not talking about a site knocked up by a kid as a labour of love and therefore lacking the attention to security that might cause a breach, we are talking about Manwin, a European commercial organisation who have committed the heinous crime, not of having security weaknesses necessarily but of storing unencrypted credit card details and unencrypted user login details, both of which have potential value.
In this case, we know who hacked the site and it is unlikely to involve any fraud, but why is it not illegal to do this? Why can a company not be massively fined, or people even jailed, for having such a lacklustre attitude to personal information?
Currently in the UK, most of the time you have to be a prolific offender or someone with deep pockets before the Information Commissioner gets involved, but this should not be the case. The law on data protection is clear, and storing unencrypted credit cards should in itself be a specific crime, so that the first time you are hacked you can be jailed rather than always being given the benefit of the doubt.
I think our government are still very green when it comes to IT and Internet issues and we have people from an older generation trying to legislate about things they simply don't understand.
The issue is worse when we consider that a lot of sites are hosted elsewhere and fall under different laws. But again, we could have an internet standard which a site can display to prove it complies with certain security practices (like OWASP's), done in a way that cannot easily be faked, such as via a UK certificate authority or the like.

Saturday, 10 March 2012

Why Use Intrusion Detection?

Web applications can use something called Intrusion Detection, which attempts to identify suspicious behaviour on the web site. It is not like a firewall, which blocks parts of the system that should never be used; it is more like watching the legitimate traffic channels to see if someone is trying to brute-force their way in.
Intrusion detection systems vary in cost and complexity but are mostly aimed at stopping brute forcing. If someone knows a way into the system that doesn't involve brute force, then ID will probably not detect or stop it. A recently hacked site reported being hit 26,000 times in 6 hours as part of an attack. You wouldn't allow someone to keep kicking your door until it eventually broke down, so why don't we use ID more often?
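As a sketch of the core idea (not any particular product), even a simple sliding-window counter is enough to notice that kind of hammering. The window length and threshold below are arbitrary assumptions:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # assumed observation window
MAX_ATTEMPTS = 20     # assumed threshold before flagging a source

_attempts = defaultdict(deque)  # source IP -> timestamps of recent requests


def is_suspicious(ip, now=None):
    """Record a request from `ip` and return True once the request
    rate within the sliding window exceeds the threshold."""
    now = time.time() if now is None else now
    q = _attempts[ip]
    q.append(now)
    # Drop timestamps that have fallen out of the window
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_ATTEMPTS
```

A real IDS correlates far more signals (URLs probed, response codes, known attack signatures), but even this much would spot thousands of hits in a few hours.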
My first thought would be ignorance for the most part. Unless you have managed a web site, and even in some cases if you have, you might simply be unaware of how to check for intrusion and what it even means. A friend worked for a church and even their site was hit sometimes by people who presumably simply wanted to deface the site and might be considered low-risk for attack but it still happened.
Secondly, there is the fear of cost. Unless you are comfortable using some open source software and setting it up, you are prey for companies that can sell solutions which can cost many thousands of pounds, something which many organisations are simply not prepared to spend (understandably).
Then there is a lack of expertise among IT technicians who might have the job of managing a web site but who have limited or no specific web site training. Without training, even if you know roughly what ID is supposed to do, you might not know how to configure it (or configure it correctly).
The simple conclusion is that, as in most cases, you need to risk assess whether you should have even basic Intrusion Detection. As previously mentioned, the likelihood of being attacked is actually quite high; depending on how much faith you have in your web applications and/or web server, you should consider it a very strong possibility. You then need to decide what is at stake if your site is attacked, defaced or penetrated. After that, take advice on what Intrusion Detection is available, whether from your service provider, from commercial outfits or from internet forums. Remember that the advice on most forums is of unprovable worth, but with enough opinions and input from people, you should be able to form a valid view that means your company won't be mentioned in the news as the one who let someone hit their site thousands of times without doing anything about it!

Thursday, 8 March 2012

Is Security by Obscurity Bad?

Quick answer, yes, if it is the only security you have! But rather than throwing out the baby with the bathwater, let us consider where obscurity is a good thing.
Firstly, if we think of structure, obscurity is not good for security, since structure can be discovered. For instance, if you based something like Chip-and-PIN on a secret encryption method which needed to stay secret to work (revealing it would enable people to bypass it), then the minute somebody revealed the secret, the whole system would be useless. Openness invites peer review, which can spot flaws you might never see yourself, so it is worth the effort. If you cannot think how to design something that will work even in the public domain, perhaps you should consider getting someone else to do it!
On the other hand, obscurity of content can help make your systems more secure. For instance, you must out of necessity keep your DB passwords secret. Even if the structure is known and perhaps even open-source, the password is the secret that unlocks the door, albeit hopefully one door on one system. This idea can be extended to things like database table names and web application user ids. If your site is called facebook and someone is trying to hack it, they will very likely assume that the userid for database access is something like facebook, webapp or facebookdb, and I'm guessing this logic would get you into many systems. If you called your login something obscure like facebook74656372, it is unlikely to be guessed or brute-forced. The same is true of database tables. SQL injection vulnerabilities are usually only serious if they can be used to obtain information; there is far less danger from someone who has a SQL inroad into your system but does not know that your user table is called USR1234123_TH rather than User. Of course, you could add the same number or key to the end of all your table names, but then seeing one of them would let an attacker guess the others.
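Generating that kind of obscured name (and a long random password) is trivial; here is a minimal sketch using Python's `secrets` module, with 'facebook' standing in as the example name from above:

```python
import secrets
import string


def obscure_name(base, digits=8):
    """Append a random numeric suffix to an otherwise guessable
    name, e.g. 'facebook' -> 'facebook74656372'."""
    suffix = ''.join(secrets.choice(string.digits) for _ in range(digits))
    return base + suffix


def strong_password(length=24):
    """A long random password, the sort intended to be written down
    somewhere safe rather than memorised."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))
```

Using `secrets` rather than `random` matters here: it draws from the operating system's cryptographic source, so the suffix cannot be predicted from earlier outputs.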
A little randomness is annoying, but it comes down to breaking our fondness for elegance and ease and thinking more in line with security.
As an aside, I am going back to old-school in my thinking that it is better to have complex passwords that are written in a book somewhere than to have simpler ones just so they can be memorised. Each one should also be different.

Tuesday, 6 March 2012

Another Project Failed?

We often read about failed projects but what does failure really mean? I was reading today about the HMRC IT projects which were supposed to help detect and recover unpaid tax, potentially saving £4.5B over 5 years. Needless to say, the National Audit Office branded them poorly performing: "The NAO attributes the delays to a series of factors, including late approval of project design or specification changes, projects being phased in over successive financial years to keep within funding limits, including adjusting the scope of work to fit within those limits, and greater complexity in delivering the projects than was first envisaged." So basically, nothing new there. It still amazes me that despite these being well-known risks to any project, they are still, seemingly, not managed at all. We have all these wonderful Project Management courses and MBA degrees, and all these people paid vast sums to manage these projects, and yet this happens.
These are not hard to plan for in advance. If you know that yearly budgets affect project payments, which they would for any serious-sized project, then you have to factor that in. It means the project perhaps takes longer to finish, but it should not, within reason, cost any more. Specification changes are another monster and, again, happen on virtually every project, the number of changes growing rapidly with the size of the project. If changes are unacceptable, then the specs need to be nailed down in advance so the developers and testers can crack on and finish quickly; if they take too long, they will deploy a project that is already out of date (like electronic MOTs). If changes are acceptable, and realistically they should be, then there should be a way to cost them into the project that increases over time. If your customer's sudden requirement to change the main system 2 weeks before delivery equates to a cost and time extension of £50M and 2 years, then that cost is made clear to the stakeholders, who can decide whether to pursue it or not. We should probably accept that sometimes things will get canned.
On the other hand, we often do no root-cause analysis on why these projects fail, and the main reason? Usually too much complexity, which leads to many unknowns or disagreements in functionality and inevitably to massive time and cost hikes. For instance, when looking at electronic patient records, another system that seemed doomed to failure, even if we just created a basic system that recorded, say, appointments, treatment, allergies and drugs for each patient at a very basic level, that would already be miles better than what exists currently, would allow an IT infrastructure to be rolled out and used (and much more easily upgraded later if required), and would cover perhaps 80% of useful functionality. You could then design the system to have pluggable modules so that at a later date, you might be able to make electronic X-Rays available in the same system as time and money permit. This allows people to use the system, iron out basic issues and see what they really miss, and therefore what should be highest priority for the next development. It also allows an open API to be created and multiple companies to develop multiple components that all just plug in.
I will say it again, simple is good!

Thursday, 1 March 2012

403 Forbidden Error

I got this error on a web site I was testing on my network yesterday. It was one of those times that you think you didn't change anything.
As you can imagine, I HAD changed something, so, as with most of these scenarios, I had to ask what I did between the time it last worked and the time it didn't.
There is a little clue in the error page which says you are not permitted to list the directory contents. I should have twigged that I was not expecting to list a directory, I had intended to hit the default page.
And that was the problem: I had removed and re-added the site in IIS and had not set up the default page for the site (which is not default.aspx), so hitting the URL without the file name specified caused the error.
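In IIS 7 that setting lives under defaultDocument, so something like the following in the site's web.config would have avoided it (the page name here is just a stand-in, since mine wasn't default.aspx):

```xml
<!-- web.config for the site; 'home.aspx' is a placeholder name -->
<configuration>
  <system.webServer>
    <defaultDocument enabled="true">
      <files>
        <clear />
        <add value="home.aspx" />
      </files>
    </defaultDocument>
  </system.webServer>
</configuration>
```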
Idiot.

Date Cock-ups, Leap Year Bugs and Management

We experienced a problem testing our code yesterday, related to some date maths which tried to project a date back 18 years onto 29th February, a day that did not exist in the target year, and fell over. It was embarrassing enough, but then we had to ask how it happened. If we are honest, for someone not very experienced it would be an easy mistake to make. There were no unit tests, but even if there were, would we test, or even be able to test, this anomaly? I don't know if the code was code-reviewed, but even if it was, would the reviewer have spotted the latent fault? I saw some code recently with a similar bug and I didn't spot the mistake, despite considering myself pretty experienced.
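For what it's worth, here is a sketch of the guard in question, assuming Python-style date handling (our code wasn't Python, so this is purely illustrative):

```python
from datetime import date


def years_ago(d, years):
    """Project a date back by `years`, clamping 29th February to the
    28th when the target year is not a leap year."""
    try:
        return d.replace(year=d.year - years)
    except ValueError:
        # 29 Feb only exists in leap years; fall back to 28 Feb
        return d.replace(year=d.year - years, day=28)

# The naive replace() is exactly the crash described above:
# date(2012, 2, 29).replace(year=1994) raises ValueError.
```

The point is not this particular clamping policy (rounding forward to 1st March is equally defensible) but that the edge case is handled deliberately rather than left to blow up.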
In this case, the question has to be raised about the software process and more particularly, the way in which management have or haven't ensured the suitability of this process.
In reality, there is always a danger that something will happen once; the real test of a quality process is whether we allow it to happen again. The first time someone spotted how easy it is to inject a date/time bug, there should have been somewhere to hang that knowledge. These opportunities are few and far between and sadly most of us don't think like that; we think, "thank goodness I caught that bug", and it stops there. The only realistic chance of catching something like this, which would easily have slipped through code review, unit testing and system testing, would be to have learned from someone else's mistake: a simple check added to a code review checklist, "have you considered whether date maths might cause a crash?"
Managers, you have been warned - sort your processes out.