Monday, 29 October 2012

ASP.NET Application and Request Paths Explained

I want to do something simple: using the current request, obtain the absolute URL of the site root and then append another file path to it. For example, given a request to http://www.example.com/myapp/directory/current.aspx, I want to obtain http://www.example.com/myapp and then append e.g. /otherdir/otherfile.aspx for use in a password reset email.
It turns out this isn't really doable out of the box, but then I realised what I really wanted was a reference for all the path-related properties of the request object, so I could work out which bits to glue together. Well, here it is:

HttpRequest
->ApplicationPath = "/MyApp"
->AppRelativeCurrentExecutionFilePath = "~/SubDir/Current.aspx"
->CurrentExecutionFilePath = "/MyApp/SubDir/Current.aspx"
->CurrentExecutionFilePathExtension = ".aspx"
->FilePath = "/MyApp/SubDir/Current.aspx"
->Path = "/MyApp/SubDir/Current.aspx"
->PhysicalApplicationPath = "C:\\inetpub\\wwwroot\\MyApp\\"
->PhysicalPath = "C:\\inetpub\\wwwroot\\MyApp\\SubDir\\Current.aspx"
->RawUrl = "/MyApp/SubDir/Current.aspx"
->Url
->->AbsolutePath = "/MyApp/SubDir/Current.aspx"
->->AbsoluteUri = "http://localhost/MyApp/SubDir/Current.aspx"
->->Authority = "localhost"
->->DnsSafeHost = "localhost"
->->Host = "localhost"
->->LocalPath = "/MyApp/SubDir/Current.aspx"
->->OriginalString = "http://localhost:80/MyApp/SubDir/Current.aspx"
->->PathAndQuery = "/MyApp/SubDir/Current.aspx"
->->Port = 80
->->Scheme = "http"
->->Segments
->->->[0] = "/"
->->->[1] = "MyApp/"
->->->[2] = "SubDir/"
->->->[3] = "Current.aspx"
->UserHostAddress = "127.0.0.1"
->UserHostName = "127.0.0.1"
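Putting that reference to use for the original problem, a minimal sketch might look like the following (the /otherdir/otherfile.aspx path is just the example from above):

```csharp
// Sketch: build an absolute URL to another page in the same application.
// "Request" is the current HttpRequest (e.g. inside a Page, or
// HttpContext.Current.Request elsewhere).
string appRoot = Request.Url.GetLeftPart(UriPartial.Authority)  // "http://www.example.com"
               + Request.ApplicationPath.TrimEnd('/');          // "/myapp" ("" for a root app)
string resetLink = appRoot + "/otherdir/otherfile.aspx";
// resetLink: "http://www.example.com/myapp/otherdir/otherfile.aspx"
```

TrimEnd('/') handles the root-application case, where ApplicationPath is just "/".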

Wednesday, 24 October 2012

FAQ - Your friend and your foe

How many of you have a Frequently Asked Questions page on your site? If not, you should have one, because it saves your support team from fielding common questions, which takes up valuable time.
Now, how many of you who have a FAQ actually fill it with frequently asked questions, and how many invented some plausible-sounding questions and don't really maintain the page? This is bad. Why? Because nothing is worse than trawling through a load of very specific (and almost certainly not frequently asked) questions only to have to contact support at the end and ask the question you could have asked ten minutes ago. That damages your reputation, and it also puts more burden on your staff and your hardware. In many cases, if I reach a FAQ with more than a handful of questions, I will very likely email support anyway, because it is simply easier than trawling through pages of junk - many people will do likewise. The answer might be in there, but I shouldn't have to expend masses of effort to find out, and it doesn't bother me if a human has to answer it for me.
An example from today: I went to the Virgin Mobile site looking for what I thought would be a frequently asked question, "how do I get a micro SIM replacement for my new phone?", since my current SIM is full-size. That seems likely to be genuinely frequently asked, but I couldn't find it. An example of a supposed FAQ that is on the site, however, is "What is Phone Fix?". I can't honestly imagine lots of people calling Virgin and asking "What is Phone Fix?"; it's clearly some kind of sales tactic, which is poor form given that about half of the supposed phone-problem FAQs include the phrase "Phone Fix".
A FAQ, like any heavily used feature of your site, deserves the same love and care as the rest. It should be managed: data should be fed back from call centres to produce actual frequently asked questions, and these can even feed into things like mailshots to ensure people are 'trained' so they don't need to ask in the first place.
Anyway, back to work...

Friday, 19 October 2012

Why Consistency is not just for Others!

One of the things that causes a lot of pain is the way in which software is implemented differently by different vendors on what is supposed to be the same standard. Browsers all treat HTML differently (some more than others!). Some browsers interpret mistakes strictly, others are more tolerant. Look at OAuth and you get similar issues with inconsistency which make for headaches and development costs.
Anyway, I realised it is very easy to point the finger at others and their shortcomings without realising that we also need to work consistently! After all, the others are just people like me who happen to work for other companies; they are probably no more or less strict, and no more or less committed to doing the right thing, than we are.
In other words, if we want to change the world, we need to look at the man in the mirror. Have you ever asked, for example, why some sites require password changes and others don't? Some need a strong password, some don't care; some allow all manner of symbols in passwords and others only letters and numbers. These systems were all written by people like you and me, and I suspect that rather than looking into standard practice and copying it (or, even better, using a library someone has already written!), we either reiterate what we learned parrot-fashion without understanding it well enough, or we put ourselves in the position of ultimate authority and decide what is right and wrong. Even worse, sometimes we get ordered around by bosses who might not understand either, but that is all the more reason to rely on established knowledge we can refer to, rather than just arguing with the boss about what we think is right (the boss usually wins that one!).
Anyway, if we want consistency, we must also apply consistency, and then perhaps one day... nope, I don't think that will ever happen.

Why API documentation is crucial

Documentation is one of the things a lot of developers hate because it is not code: it does not do anything or move anything. But documentation is really important, not just for remembering why you did something but also, especially in the world of published APIs, so that other people know how to use your software.
A case in point is the DotNetOpenAuth library: a valiant open-source library which provides supposedly easy functionality for various federated security protocols. The problem? The documentation is appalling. The basic API docs are about the minimum you could possibly publish and provide no help. Many 'helpful' forum comments about the library suggest starting with the samples and working from there, except, of course, the samples are not well documented either, and in some cases are quite extensive, which makes it very hard to know what each part does and therefore how to translate it to your own problem area.
There are simple controls which no doubt make things pretty easy for many OAuth/OpenID clients, but try writing your own provider and you are basically stuffed.
The result? All of that hard work is basically in the bin for me. I had to implement OAuth2 from scratch using basic HTTP functionality, no doubt missing certain things the library might handle (although I'm pretty sure I meet the subset of the spec I am interested in). It's a bit like scrapping a Ferrari because you can't work out how to start the engine.
In this case, in my opinion, the whole piece of work is largely worthless to me, and the only bit I have made use of is a Microsoft extension library which makes things much simpler for those of us who are not federation experts.
So be warned! If you want people to use your libraries and APIs, you MUST provide good documentation. Instead of answering individual queries on forums, spend the time describing the answer in a way that can be posted onto a FAQ/API docs/Examples page.

Tuesday, 16 October 2012

Use generic LINQ extensions on non-generic collections

I was using an old (non-generic) .NET collection, ChannelEndpointElementCollection, to find a specified endpoint in web.config and pull out its details. I only had one endpoint initially, so I just used mycollection[0], but then I added another endpoint and decided I didn't want the code to depend on the order of elements. I needed something like collection.First(p => p.Contract == "Contract1"), which is when I realised that the generic extension methods like First, Select etc. do not work on non-generic collections, because the element type cannot be inferred from the collection.
Fortunately, the powers-that-be knew this and added another extension method, OfType<T>(), which lets you specify the element type and returns a typed generic sequence that can then be used with the other LINQ extension methods:


var theAddress = endpointCollection.OfType<ChannelEndpointElement>()
    .First(p => p.Contract == "Contract1").Address; // First, not FirstOrDefault: a missing endpoint throws a clear exception rather than a NullReferenceException

Friday, 12 October 2012

Tunnelling RDP over ssh using Putty and Private Keys

Well this post is about several things rolled together. First let me introduce the "why".
RDP is very useful for managing Windows remotely but is not secure so should not be used directly across the interwebs. I want to tunnel the RDP across a secure protocol, in this case ssh makes a nice choice because tunnelling is something it does quite easily. In fact you can tunnel most protocols over most others but there isn't normally a reason to do that! The tunnel provides protection for my RDP protocol.
What tunnelling involves is pointing your client (in my case a Remote Desktop session) at a port on my local machine and having ssh listen on that port. ssh then talks to the ssh port on the foreign machine and redirects the other end of the tunnel onto whatever port my client actually needs, in my case 3389, the standard RDP port. The advantage is that I get SSL-type security for the connection and, on top of that, I do NOT have to open 3389 on the firewall of my foreign machine.
In my case, I have made another subtle adjustment: rather than using the default port 22 for ssh on my foreign machine, I have moved it. Quite simply, ssh ports are hammered by automated attackers, so moving away from port 22 avoids most of that noise. I have chosen 7888, but you can choose whatever you want.
Now onto the implementation. Because I am running Windows, there are a few choices for getting ssh functionality (it is not built in). I enjoyed running Cygwin, a Linux-style prompt running under Windows, which lets me set these things up the same way I would on Linux. Unfortunately, it seems to have stopped working properly, so I now use Putty, a very common tool that runs well on Windows. The only slight downside is that I am using a private key to access my remote system, and Putty has its own slightly different format for keys, which means a few extra steps before the simple setup in Putty can be carried out. If you are not using private keys for ssh access to your foreign machine, ignore the steps involving puttygen and the private key settings in Putty.

  1. Download putty and puttygen
  2. Run up puttygen and select Conversions->Import key from the menu. Select your private key file which will probably be in .pem format.
  3. Once the key imports, click save private key and this will save a .ppk file which is putty format
  4. Run up putty, I would suggest immediately saving a named session - like "Webserver Tunnel" and then remembering to save it as you make other changes. This is just to avoid mucking something up and having to start from scratch.
  5. Choose SSH on the first page, then enter the host name of your remote ssh machine and the port to use (remember I used 7888). The ssh machine in my case is NOT the same as the Windows box I am connecting to; I use this ssh proxy because it is a Linux box and easier to set up for ssh.
  6. Click on the Connection/ssh/auth link and choose your ppk private key (if applicable)
  7. Click on the Connection/ssh/tunnels link and add a new entry. The source port is the port on your local machine that you want to tunnel from. This can be whatever you want but keep the number high to avoid other ports that might be in use. The destination is then the ip address and port to use at the foreign end of the link. This might be the ip address of the foreign ssh server but in my case it is the ip address of the windows box and port 3389. Once the ssh tunnel has been created, this forward works because the proxy ssh server has been given specific firewall access to the windows box port 3389 - I would not be able to connect to it directly.
  8. Save the settings and then click Open. This will log you in and you should see a window even though you are actually tunnelling. If you did not use a private key, you will be able to login here with your credentials. Once the window is open, the tunnel will be running and you can connect your client, in my case an RDP session, to localhost:localport
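For comparison, the equivalent tunnel with a command-line ssh client (such as OpenSSH under Cygwin or Linux) would look something like the following. The hostnames, user and key file are placeholders standing in for the setup described above:

```shell
# Forward local port 3390 to port 3389 on the Windows box, going via the
# ssh proxy listening on port 7888. -N means "no remote command, just tunnel".
# All names here are placeholders - substitute your own.
ssh -N -p 7888 -i mykey.pem -L 3390:windows-box:3389 user@ssh-proxy.example.com
# Then point the RDP client at localhost:3390
```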

Monday, 8 October 2012

Cannot Establish Trust relationship with site....

I saw this earlier and, unlike most times I get an error, the answer was fairly obvious. I was calling a web service from a web application; both lived on the same server and required SSL from a certificate. The problem was that my endpoint was set to use https://localhost/.... while the SSL certificate is actually for my domain name. I had to change the endpoint to use the full public domain name, https://www.example.com/, which still resolves to localhost but now matches the SSL certificate name.

MySql Access from Windows

The MySql .NET connector is a straightforward way of connecting to MySql databases. It works much like the standard System.Data classes (DbConnection and friends), which makes it easy to port an application from Sql Server to MySql. However, the connection string takes a small tweak - it uses fields like server= and database= (I think!?) - and then you need to connect the web server to the database server in the most secure way. You will need to do the following:

  1. Create/establish your MySql server (I use the one on Ubuntu so it is nice and slick) - in my case this is on Amazon Web Services.
  2. Firewall your database server so it only allows ssh access (I change the port to be something other than 22 to avoid it getting attacked). Also, allow access to port 3306 (MySql) but ONLY for your web server's IP address.
  3. If you want to connect directly to the database from your local PC after you have firewalled it, tunnel the connection in using ssh: ssh -fCNp <ssh-port> <user>@<db-server> -L <local-port>:127.0.0.1:3306 -i <key>.pem and then connect to 127.0.0.1:<local-port> with MySql Workbench (or whatever tool you are using).
  4. Edit /etc/mysql/my.cnf on your db server so that the MySql daemon is bound to all interfaces by setting bind-address to 0.0.0.0. You can only bind it to all interfaces or to a single IP address, and since you often need it bound to 127.0.0.1 for tunnelling in, you have to go for the all-interfaces option. If your db server has multiple interfaces and you don't want it listening on all of them, use a firewall like firehol to block mysql traffic on the other interfaces.
  5. You will want a user that is locked down to only access the database for your web application - DO NOT USE ROOT. If you have multiple apps, I recommend a different user per application so that if one was hacked, the damage is limited. These users should not be able to do anything "out of band" like drop tables, create databases etc and ideally, if your app can work with only stored procedures, it should only need execute permission and nothing else.
  6. You can try a simple connection from your web app, but if that doesn't seem to work, download MySql workbench and attempt to create a connection to the MySql server. This will tell you whether the connection is valid or not and should give you some clues to what might be wrong. For instance, you might need to open outgoing traffic on the firewall to allow Windows to talk to the MySql server - I can't remember if I had to do this or not.
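Once the networking is in place, the connection itself is only a few lines. A minimal sketch using the MySql .NET connector follows; the server, database and credentials are placeholders, not values from this setup:

```csharp
// Sketch using the MySql .NET connector (the MySql.Data assembly).
// All connection string values below are placeholders - substitute your own.
using MySql.Data.MySqlClient;

var connStr = "Server=db.example.com;Database=myappdb;Uid=myapp_user;Pwd=secret;";
using (var conn = new MySqlConnection(connStr))
{
    conn.Open();
    using (var cmd = new MySqlCommand("SELECT 1", conn))
    {
        var result = cmd.ExecuteScalar();   // succeeds if the tunnel/firewall setup is right
    }
}
```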

Thursday, 4 October 2012

Setting up digest authentication in Apache on Ubuntu

I've set up a noddy demo site on Apache but didn't want it open to the world. You can do this with basic auth, but then the password is sent in the clear and is very easy for someone to snoop. Digest authentication sends the password hashed instead, and most (all?) modern browsers support it, so I thought I would enable it.
I am working with an Ubuntu appliance on Amazon web services so it comes with a few basic tools and I installed apache myself from the command line. The following lists the steps you need to use in order to enable digest auth on a directory.
Firstly some context. I wanted to use a simple alias in the url (/demo/) to point to a differently named directory in my home directory (/acmedemo/) and lock it down to a single web user called demouser. Also note that the default username for the Ubuntu appliances on aws is ubuntu.

  1. Run sudo a2enmod auth_digest since it is likely it was not installed in the base install
  2. Run htdigest -c pwdfilename realmname demouser NOT as sudo in your home directory. The realm name can be anything but will need to match the apache config for this restricted area. This will ask for a password for demouser and then create a file with the user name, realm and hashed password in it. If you want to add additional users, run the same command without the -c
  3. Put your web site into a sub-directory of home (or anywhere else but this is good enough and easier to backup!), in my case /acmedemo/. This means the first file will be /home/ubuntu/acmedemo/index.html
  4. Edit /etc/apache2/sites-available/default (or other server configs, these can be set locally in htaccess and other places but this is the simplest case). Add in the following section:
 Alias /demo/ "/home/ubuntu/acmedemo/"
    <Directory "/home/ubuntu/acmedemo/">
        Options Indexes FollowSymLinks MultiViews
        AllowOverride AuthConfig
        Order allow,deny
        Allow from all
        AuthType Digest
        AuthName "realmname"
        AuthDigestProvider file
        AuthUserFile /home/ubuntu/pwdfilename
        Require user demouser
    </Directory>
The settings should be fairly obvious, and the names need to match what you have done: the realm in AuthName must match the realm you gave to htdigest, and AuthUserFile must point at the password file it created. From experience, if you are struggling to get it working, start with what I've done and change one thing at a time; changing the password filename, the realm and the web user all at once makes it much harder to debug. If you want to allow any valid user, change the last entry to Require valid-user. Once you're done, run sudo service apache2 restart, and if you get any errors, use tail /var/log/apache2/error.log
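A quick way to check the restriction is working, assuming curl is available and the site is being served locally, is to request the protected page with and without digest credentials:

```shell
# Without credentials: Apache should answer 401 Unauthorized
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/demo/
# With digest credentials: should answer 200 (substitute your own password)
curl -s -o /dev/null -w "%{http_code}\n" --digest -u demouser:yourpassword http://localhost/demo/
```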

Monday, 1 October 2012

Call to undefined function imagecreatefromjpeg()

Lovely error in Wordpress that exhibited itself as missing images in posts. Because I had been oiking around with permissions earlier, I assumed it was something related to that, but no.
I ended up using tail /var/log/apache2/error.log and noticed the above error. The only reason I thought to look there was that typing the image URL (...wp-admin/admin-ajax.php?mod=img&action=getProdImg&pid=1235&for=big&imgId=58) directly into the browser gave a 500 (internal server error).
Anyway, a quick search on t'interweb and it seems that the library providing this function, php5-gd, is not installed by default in my Ubuntu LAMP installation, and therefore the function is missing.
Used apt-get to install the library, restarted apache2 and away we went....
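For the record, the fix was along these lines (php5-gd is the package name on this era of Ubuntu; later releases renamed it php-gd):

```shell
# Install the GD extension that provides imagecreatefromjpeg(),
# then restart Apache so PHP picks it up.
sudo apt-get install php5-gd
sudo service apache2 restart
```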

Wordpress, permissions and plugins

Wordpress is great for a quick site construction. I downloaded and installed it but as per most of the things I do, I make little changes as I go along to try and make it secure and this can cause problems down the road.
Specifically, I changed the ownership to me:www-data so that I owned the files but Apache was in the group for all of them. I then set the files to rwx for me, r for group and nothing for others. I also changed the directories so that they were not readable or executable by group, so no one could list directory contents. I realised early on that during the install the web server would be accessing the root folder, so I allowed group to write there as well.
I then tried to install a plugin. To avoid the automatic install, which requires ftp access, I extracted a plugin into the wp-content/plugins directory, but when going into Plugins it was not listed. The reason, as you might have guessed, was that www-data needs to be able to read the directory contents so it knows what plugins are available. I changed JUST plugins and its descendants to be group readable and then it all worked as expected!
So if plugins are not working in Wordpress, ensure:

  1. You are using the correct folder (wp-content/plugins)
  2. That your directories are not too deep e.g. /plugins/myplugin/config.php and NOT /plugins/myplugin/myplugin/config.php (which might occur if you have unzipped the plugin into a new folder)
  3. That your permissions on the folders allow the web server identity (www-data for apache2) to read and execute the directories (not execute the files!)
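The permission fix in point 3 can be sketched as follows. This uses a throwaway demo tree so it is safe to run as-is; for a real site, point the variable at your WordPress root, run with sudo, and chown to your-user:www-data first:

```shell
# Demo tree standing in for a real WordPress root such as /var/www/wordpress
WP=$(mktemp -d)
mkdir -p "$WP/wp-content/plugins/myplugin"
touch "$WP/wp-content/plugins/myplugin/myplugin.php"

# Directories: owner rwx, group rx (so the web server can list/traverse), others nothing
find "$WP/wp-content/plugins" -type d -exec chmod 750 {} \;
# Files: owner rw, group r, and crucially no execute bit on any file
find "$WP/wp-content/plugins" -type f -exec chmod 640 {} \;

ls -lR "$WP/wp-content/plugins"
```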