Monday, 30 June 2014

Convert Virtual Box Windows Client to use virtio drivers

One thing that can make a big performance difference when running Windows under Virtual Box is to use the native virtio drivers for the host's paravirtualised devices. I am told it makes a big difference, but since that came from a friend I have no numbers to back it up!

By default, certainly on my install, Windows uses an emulated IDE disk and a Realtek network card, but these just add a fake hardware layer between the host and the Windows client for no good reason that I could see. By using the Red Hat virtio drivers, the client can talk to the host much more directly. There are some tricks, however, that you need to know to make this as painless as possible.

WARNING: You really should back up your client before doing this unless you are happy to risk losing everything. None of the problems I had would have deleted the disk contents, but would you really be prepared if one did?

ANOTHER WARNING: You will need local access to your virtual machine; remote desktop will not always work while you are changing network cards and IP addresses. Of course you can remote desktop to your host, just not to the client.

1) Download the iso with all the virtio drivers on it from here. This is an iso file that needs to be visible to your virtual machine host. Note, you may need to change permissions on the download, and you might need to exit and reopen the host interface so it re-caches the files in the relevant location.
2) Mount the iso image as the cdrom drive of your Windows client, then open a command prompt and navigate into the correct cd folder for your build, e.g. d:\win7\amd64. You can then use pnputil.exe to add the inf packages into the Windows driver cache, which makes installing them quicker and easier. The syntax is pnputil.exe -a filename.inf. Running that will show a security warning but should otherwise report that the package was added. I did this for all the inf files in the folder (about 6) - see the example after this list.
3) Reboot your client and ensure that any automatic driver updates are applied during the reboot. There might not be any, but it is worth making sure!
4) Once the reboot has completed, you can now shut down your client. Do not reboot it; it needs to be fully shut down to change the hardware settings.
5) Once it is shut down, it is up to you whether you want to do the network cards and the hard disk at the same time. I would recommend doing them one at a time just to keep problems to a minimum, but it is up to you. Read down if you also want to do the disks; for now, we will continue with the setup for the network cards.
6) Open the settings of the shut-down client and change the network card type from "Default" to "virtio". This is all you need to do before restarting, but note that depending on your IP address settings, you might now temporarily lose your original IP address (Windows will see this as a card change); if you are running DHCP then you probably won't care.
7) Start up the client and check in Device Manager that your network card now shows as "Red Hat virtio" or similar, rather than "Realtek...". Make any changes you need in your network properties, such as resetting the IP address to a static value (note that Windows will ask you if you really want to reassign the old IP address to the new card). You can now shut down to do the disks.
8) You cannot change the disk type directly because the plug-and-play driver install can only take place after Windows has started booting. To get around this, we create a second disk (of any size, say 1GB), set it to virtio and start the client again so it installs the drivers for the second disk. This takes a little while, but if you installed the packages properly earlier from the drivers CD, it should all happen automatically and Windows will start up once it is finished. You can check the second disk has worked by starting Disk Management; it won't appear in Explorer since it is uninitialised and unformatted.
9) Shut down the client again; this time change the main disk from IDE to virtio, REMOVE the second disk you just added and restart. It is important to remove the second disk, otherwise the client will attempt to boot from it and fail (in my case, the boot order followed the disk numbering!). Because the virtio drivers were installed into Windows for the second disk, it should have no problem booting from the first disk with the same drivers.
10) All things having worked, you should be able to boot up as before. Again, check Device Manager to ensure the disk drive is now using a Red Hat virtio driver rather than whatever was there before (the qemu IDE driver?).
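
As an example of step 2 above, the whole driver-staging step boils down to something like this (the folder name depends on your Windows version and architecture):

cd /d d:\WIN7\AMD64
pnputil.exe -a *.inf

pnputil accepts wildcards, so this adds every inf package in the folder in one go; expect the same security warning as for a single file.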

If booting from the virtio disk fails for some reason, you should be able to kill off the client, set the disk back to IDE and be up and running again while you find out what went wrong.

If you end up in an endless network boot cycle and Windows won't start, check the hard disk order in your settings and ensure your proper disk is the first one. If it isn't, I think you have to change the boot order in the client BIOS; I can't see how to do it in the settings.

Unidentified network or public network in Windows under virtio

When running Windows Server under Virtual Box, you might find that the network appears as "unidentified network" and therefore defaults to the public firewall profile. There are two related causes for this.

Basically, Windows looks for a default gateway to decide whether the network is a private network or not. If you have forgotten to set one in the TCP/IP properties, or it has not been set by DHCP, or you have entered it incorrectly, you will see this problem.

The other cause relates to host-only networks, which you might use to lock down communication between two servers. Since, again, there is no default gateway or DHCP on a host-only network, Windows cannot identify it. This time, it won't even appear in the networks page, so you can't configure it there. You need a registry hack to tell Windows that the network is trusted, after which it will display correctly. Follow the instructions here.
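
In case that link dies: the usual trick (and I am assuming this is what the linked instructions describe) is to mark the host-only adapter as an NDIS endpoint device, which makes Windows skip network identification for it entirely. Under the network adapter class key, find the numbered subkey (0000, 0001, ...) whose DriverDesc matches your host-only adapter and add a DWORD value *NdisDeviceType set to 1, e.g. (the 0001 subkey is just an example):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0001" /v *NdisDeviceType /t REG_DWORD /d 1

Disable and re-enable the adapter afterwards for the change to take effect.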

Monday, 23 June 2014

Crontab error - syntax error, bad username

I had what I thought was a really simple crontab file in /etc/cron.d which simply called a bash script in my home directory. I realised it wasn't running and looked in /var/log/syslog to find the following error:

Jun 23 09:39:27 cron[19563]: Error: bad command; while reading /etc/cron.d/backup-files
Jun 23 09:39:27 cron[19563]: (*system*backup-files) ERROR (Syntax error, this crontab file will be ignored)

The file was definitely written correctly; I even ran the crontab line through an online checker and it passed. Since I was running a script directly, I tried prefixing it with /bin/bash, and got a similar but different error:

Jun 23 09:45:24 cron[19591]: Error: bad username; while reading /etc/cron.d/backup-files
Jun 23 09:45:24 cron[19591]: (*system*backup-files) ERROR (Syntax error, this crontab file will be ignored)

Really annoying. I ended up trawling the net for answers, of which there were many, and...

The Solution

The system crontab (the kind run by root from /etc/crontab and the files in /etc/cron.d) has an additional column which specifies the user to run the command as. Since I had copied the comments from my user crontab, they didn't mention this!
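
For reference, the header comment in Ubuntu's /etc/crontab spells out the extra column:

# m h dom mon dow user  command

A per-user crontab (edited with crontab -e) has the same layout minus the user column.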

So I ended up changing:

0 23 * * 1-5 /bin/bash /home/ubuntu/backup-files

to

0 23 * * 1-5 root /bin/bash /home/ubuntu/backup-files

and it was all happy again!


Saturday, 21 June 2014

Netbeans project from Windows won't open on Ubuntu

I have used Netbeans for quite a while and find it great for PHP development. Recently, I set up a new Ubuntu PC at home to do some dev work and downloaded a Netbeans project from Subversion.

I installed Netbeans from the Ubuntu repos and tried to open the project downloaded from my Subversion repo, but it couldn't see a project to open, despite the folders appearing to be there.

I went back to work and double-checked that I hadn't missed a file. I hadn't.

It suddenly occurred to me to check the versions: although I was running 7.3 at work on Windows, the version in the Ubuntu repo (for some reason) is only 7.0.2, which meant it couldn't recognise the project structure created by the newer Windows install.

It was nothing to do with Windows vs Ubuntu. I had to uninstall the old version of Netbeans and install a newer version directly from the Netbeans web site, and it all worked fine!

Thursday, 19 June 2014

URL Encoding, Percent Encoding and what to do with spaces, %20 and +

We constantly stumble over weirdness in the world of IT, partly because of so many competing standards or ideas, and partly because of legacy problems that often cannot be fixed due to chicken-and-egg dependencies between web servers and clients/browsers.

This one had me confused and stumped for a while, and it relates to the various flavours of URL encoding. The basic idea is that if you want to pass a URL inside a URL, the multiple instances of things like http:// in the full string could confuse the web server and make the URL unparseable. The solution is that any "reserved" characters that have special meaning in a normal URL can be encoded as %HH, where HH is a hex number. For instance, you might have seen http:// replaced with http%3A%2F%2F; at the web server end, the reverse is carried out to recover the original text.

The same is actually true of any data containing reserved characters that is sent via a URL, not just other URLs. For instance, if you generate or send some kind of random code to the web server that could include reserved characters, you would need to do the same thing to avoid confusing the web server. In the case of non-URLs (such as random codes), you might instead choose to encode the data with something like Base64 before putting it on the URL, to make things much neater. You could do that for URLs too, but Base64 has a size overhead of 33%, which would make a long URL noticeably longer, so URL encoding (or percent encoding, as it might be called) is usually the weapon of choice.

So far, so good, but there are some questions. What happens if I want to pass a URL that already contains something like %3A in it? Encoding it would, as you might expect, replace the percent symbol with %25, while the 3A, being unreserved, stays as it is, so you end up with %253A. What this means is that you must be really careful not to multiply-encode strings: every time you encode, every percent symbol is escaped again, and you would need to match the number of decodings at the other end - you should only do it once.
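
You can see the effect with Java's built-in encoder (a quick illustration; URLEncoder also applies the form-encoding rules discussed below, but it is the percent handling that matters here):

import java.net.URLEncoder;

public class DoubleEncodeDemo {
    public static void main(String[] args) throws Exception {
        // Each pass of encoding escapes the '%' again.
        String once = URLEncoder.encode("%3A", "UTF-8");
        String twice = URLEncoder.encode(once, "UTF-8");
        System.out.println(once);  // %253A
        System.out.println(twice); // %25253A
    }
}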

Another question: why are there two ways of encoding spaces? Back in the bad old days, when the mime type application/x-www-form-urlencoded was specified for passing URL-type data between browser and server (or vice-versa), someone decided to use normal percent encoding EXCEPT that spaces would become + symbols rather than percent escapes (and newlines are normalised). It works, but it is confusing and smells of a shortcut that should never have been taken. The real problem is not that it doesn't work, but that it invites all manner of compatibility problems. For instance, an encoder that produces output in line with these HTML specs will encode spaces as pluses, while real pluses become %2B. In other words, encoding "Hello There+" produces "Hello+There%2B", which looks strange since + is supposed to be reserved. If you then decoded this with a decoder that wasn't designed for application/x-www-form-urlencoded, you would incorrectly get "Hello+There+".

Any encoder that is NOT specifically for application/x-www-form-urlencoded will replace spaces with %20, which is far more consistent: "Hello There+" => "Hello%20There%2B".

The moral here is to test exactly what your encoders and decoders are doing, especially when your data might or might not contain spaces or pluses; otherwise you might find that something works one day and not the next. The simplest approach is a short test: encode something like "Hello There+" and see what it produces; if it replaces the space with a plus, check that your decoder replaces the + with a space. If your data contains pluses and is NOT encoded at all, make sure your web server/service/application is not automatically decoding it and replacing the + with a space; if it is, you might have to encode the data even though + is strictly safe to send in a URL.
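
As a concrete version of that test, here are the two behaviours side by side in Java: URLEncoder implements application/x-www-form-urlencoded, while the java.net.URI constructor does generic percent encoding:

import java.net.URI;
import java.net.URLDecoder;
import java.net.URLEncoder;

public class SpaceEncodingDemo {
    public static void main(String[] args) throws Exception {
        String raw = "Hello There+";

        // Form encoding: the space becomes '+' and the literal '+' becomes %2B.
        String form = URLEncoder.encode(raw, "UTF-8");
        System.out.println(form); // Hello+There%2B

        // The matching decoder recovers the original.
        System.out.println(URLDecoder.decode(form, "UTF-8")); // Hello There+

        // Generic percent encoding: the space becomes %20 and '+' is left
        // alone, because '+' is legal in a URI path.
        URI uri = new URI("http", "example.com", "/" + raw, null);
        System.out.println(uri.toASCIIString()); // http://example.com/Hello%20There+
    }
}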

Tuesday, 17 June 2014

Calling web services asynchronously in Android

This is something I had to do when first creating my PixelPin Android app. Depending on what web service client you use, you might find that you cannot call the web service from the main thread without causing an exception. I ended up using the Apache web service client, since it matched much more closely with a coding style I was familiar with (the Java one is really confusing!), but either way it is not a good idea to call a web service from the main (UI) thread, for the simple reason that the call will block, possibly for tens of seconds in a bad network area, and that would lock up the UI, which is obviously not good.

Below are some code extracts that describe the way in which I call the web service asynchronously using a simple generic class called android.os.AsyncTask, which does most of the heavy lifting to move the blocking code off the main thread.

The calling Activity

There will obviously be an activity that needs to call the web service. Rather than calling the service directly and blocking, we start the async task (shown in a minute) and then return. For instance, our Activity might call something like this:

CheckUserCanLoginTask task = new CheckUserCanLoginTask();
task.attach(LoginActivity.this);
task.execute(email, getDeviceId());

We will look at the code for the task in a minute, but there are a few things to point out. Firstly, the attach method is what I have used to allow the task to call back into the Activity when it has finished. You could pass the activity in as one of the parameters to execute(), but I think this way is clearer. The execute() function takes a variable number of parameters, but they must all be the same type, which is specified in the code for CheckUserCanLoginTask. String is usually the lowest common denominator, so I tend to use String... for the params here. The actual params are not important; in this case they are simply the parameters my task needs to work. You don't have to pass any parameters at all if none are relevant.

The Async Task

The AsyncTask class is generic and takes 3 type parameters: the first is the type of the variable-length params list passed to execute(), the second is the type used to report the progress of the task (if relevant - I don't tend to use it for web service calls) and the third is the type of the result passed from the background function, doInBackground(), to the 'finished' function, onPostExecute(). My code for this specific task looks like this:

package org.PixelPin.PixelPinMobile;

import android.os.AsyncTask;

// Type parameters are <Params, Progress, Result>; here the execute() params,
// the (unused) progress type and the result are all String.
public class CheckUserCanLoginTask extends AsyncTask<String,String,String> {

    private LoginActivity activity;

    // Gives the task a reference back to the calling activity so that
    // onPostExecute() can deliver the result.
    public void attach(LoginActivity act)
    {
        activity = act;
    }

    // Runs on a background thread; it is safe to block here.
    @Override
    protected String doInBackground(String... params) {
        return PixelPinWebService.CheckUserCanLogin(params[0], params[1]);
    }

    // Runs on the main (UI) thread once doInBackground() has returned.
    @Override
    protected void onPostExecute(String result) {
        activity.CheckUserFinished(result);
    }
}


There isn't much that is complicated here, but note that the doInBackground() function is the asynchronous part; it is NOT the same as the execute() function that you call from the Activity, which starts up a thread to run doInBackground(). Obviously, you could do a whole load of stuff in here and even use the progress functionality if you want to (I use it in one activity that has multiple steps, returning an int between 0 and 100 to the calling activity). In this simple case, the task calls one method on a static web service class, which handles the web service client etc. In my case, the web service returns a string, which is returned from doInBackground() and then passed to onPostExecute(). You could consume the result there, but in my case the data is used by the calling activity, which I call using the reference I attached earlier.

onPostExecute() is supposed to be called on the main thread, so you can call directly into the activity and any of its UI elements. Some people have complained about problems with the wrong thread being used (user error?), in which case simply pass the calls off via new Handler().post(new Runnable() { public void run() { // Code in here } });

Progress

If you want to report progress, the second type parameter to AsyncTask is the type of the progress value. In one of my activities I use an int to report progress. You do this by calling publishProgress(), which takes a variable list of the type you specified; in my case I pass a step number and a percentage. The step number drives some checkboxes and the percentage drives a progress bar, but you can provide one value or several depending on what you are doing (all of the same type).

The other change required in your AsyncTask implementation is to implement onProgressUpdate(), which takes the same arguments as publishProgress() and which, as you have probably guessed, gets called on the main thread whenever publishProgress() is called. It is here that you can call back into your activity and cause some kind of progress meter to update.

@Override
protected void onProgressUpdate(Integer... progress) {
    mActivity.updateProgress(progress[0], progress[1]);
}
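
For completeness, the doInBackground() driving that snippet might look something like the sketch below (the steps and percentages are made up; such a task would be declared AsyncTask<String, Integer, String>):

@Override
protected String doInBackground(String... params) {
    int steps = 4;
    for (int step = 0; step < steps; step++) {
        // ... do one chunk of the real work here ...
        // The first value drives the checkboxes, the second the progress bar.
        publishProgress(step, (step + 1) * 100 / steps);
    }
    return "done";
}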

Conclusion

It is quite easy to get async capability using AsyncTask, and there are other features not mentioned here, such as handling the task being cancelled, so please look up the docs here: AsyncTask reference

Tuesday, 10 June 2014

Content Security Policy and .Net

Security is Layers

One of the things you learn when working with computer security is that lots of layers is a good approach to making a site secure. You can make some quick wins, or reduce the chance of an attack by maybe 90%, but to get that warm feeling of security, you need to pile on the layers.

Content-Security-Policy

One of these layers is called Content Security Policy: an HTTP mechanism that helps avoid cross-site-scripting and similar injection-based attacks. For example, if someone finds a way to add a script to a page on your web site, that script could capture pretty much anything the user does (including typing login credentials) and send it to another site, without the user even knowing. Since the web is designed to work cross-domain, this will work and no-one will know.

Content Security Policy is an easy enough concept to understand. It does several things, but the largest of these is telling the browser which domains it is allowed to load external files from. These include common links such as images and scripts, but you can also specify the policy for fonts, objects, AJAX connections, media, style sheets and frame sources. These can either be specified globally in a simple statement like Content-Security-Policy: default-src 'self', or as a much more complex policy such as Content-Security-Policy: default-src 'none'; script-src 'self'; font-src http://fontsource.net http://fontawesome.net; media-src https://mymediasource.com

The theory is pretty straightforward. Any resource type not specified inherits the default-src value ('none' is a useful default), and for each directive you can specify 'none', 'self' and/or URLs to restrict the origin of external files (you are allowed to combine 'self' with URLs).

Restricting Javascript and CSS

Another feature of CSP is locking down inline script and CSS. Preventing scripts from other domains will not help if someone is able to inject a script directly into your page, rather than just a link to a script, so CSP provides a special flag for script-src and style-src, 'unsafe-inline', which tells the browser to allow inline script and styles. Where possible, you should not allow inline scripts/styles, although apart from the work of moving any existing code out of the page, you might well have existing libraries that use inline scripts or styles and which you don't have direct control over. For this reason, you can and should use the report-only option (the Content-Security-Policy-Report-Only header) when creating your policy, which tells the browser to report violations but not to block the content. This way you can see any problems before you start locking your system down.

A second part, which applies to Javascript, concerns a very large security hole, also known as the eval() function, which executes the text passed to it as if it were code. Naturally, this allows all manner of exploits depending on how a user can get code/text into the page, and, again, it is disallowed by default when CSP is enabled. You can allow its use by specifying 'unsafe-eval' in the script-src section. Note that in .Net, the validators use eval(), which is annoying and means this measure cannot currently be used (although there is a chance I will rewrite my validators to do something different).

Stitching it Together

Basically, to code this, you need a simple string builder on which you set various properties and which ultimately outputs a single string to be attached as a header to the HTTP response for your site's pages. It could be set differently for different pages if required. I have created a .Net helper class to do this, which is available for free on github.
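
My helper is .Net (see github), but the idea is small enough to sketch. A minimal, hypothetical version of the same string builder in Java might look like this (the class and method names are mine, not the library's):

import java.util.LinkedHashMap;
import java.util.Map;

public class CspBuilder {
    // Directive name -> source list, e.g. "script-src" -> "'self' https://cdn.example.com"
    private final Map<String, String> directives = new LinkedHashMap<String, String>();

    public CspBuilder directive(String name, String sources) {
        directives.put(name, sources);
        return this;
    }

    // Produces the single header value, e.g.
    // default-src 'none'; script-src 'self'; style-src 'self' 'unsafe-inline'
    public String build() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : directives.entrySet()) {
            if (sb.length() > 0) sb.append("; ");
            sb.append(e.getKey()).append(' ').append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String header = new CspBuilder()
                .directive("default-src", "'none'")
                .directive("script-src", "'self'")
                .directive("style-src", "'self' 'unsafe-inline'")
                .build();
        // Attach to every response, e.g. response.setHeader("Content-Security-Policy", header);
        System.out.println(header);
    }
}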

Weaknesses

There are two weaknesses to this system (although even with them it is still stronger than doing nothing). The first, already mentioned, is that you sometimes have to make the policy less restrictive to make up for the fact that libraries outside of your control might use unsafe methods.

The second is that if you are using a popular content delivery network (CDN), it would perhaps be trivial for an attacker to put their evil script on the same CDN and have it allowed by your policy. For this reason, you cannot use CSP as a control by itself; it needs to be part of a layered approach to security. You can also be specific in the path of your URL: for instance, you could use http://maxcdn.com/libs/jquery/ instead of http://maxcdn.com, but you need to include a trailing slash if it is a path prefix (rather than a full path to the real file), otherwise it won't work.

Thursday, 5 June 2014

process launch failed: failed to get the task for process

I'm starting to wonder if there is an increased chance of depression for those writing iOS apps (and possibly Mac OS ones). XCode is very old-fashioned, and even where it attempts to automate things, it does so in a way that is not entirely obvious.

The whole area of provisioning profiles and code-signing is incredibly frustrating, especially when it doesn't work but many of you have probably seen the above error at one point or another, probably when trying to debug on a local device.

In my case, I had mucked up a project file while trying to get everything into source control (the irony of breaking code while using source-control) and I thought I had repaired everything and got it building again until I got this error.

The project had run on this device before, so I assumed I had changed something by mistake, but all the help on the web seemed to point to a wrong selection of provisioning profile. I deleted the current ones from ~/Library/MobileDevice/Provisioning Profiles and re-built in XCode, which should have re-downloaded them (although one of them didn't come back), but it STILL didn't work (same error).

After reading the Stack Overflow posts several more times, I eventually twigged that although I had chosen "none" for the provisioning profile for debug, I had a deployment profile selected for release, BUT the scheme was set up to "Run" on the device and not "Debug"; in other words, the scheme was telling XCode to run the signed release version instead of debugging it.

WHAT? In Visual Studio, you can change a drop-down between debug and release and then either "Run" the project or "Debug" the project. Nice and clear. XCode only seems to have one go button, and it is not at all obvious what it will do. Even the debug menu does not have a "Debug" command. You have to go into Product -> Scheme -> Edit Scheme, choose "Run" on the left-hand side, choose the iOS device in the Destination at the top and set it to debug rather than release. As long as Archive is set to Release, it will still deploy fine to the app store.

People like Macs and iPhones, and in some ways the toughness of using these tools to develop is just how it is. Sadly, though, I can imagine many developers ditching iOS for Android and Windows Phone development, which, for me anyway, seems MUCH easier, and that will just push up the cost of those iOS developers who stay with it - nobody wins. Apple really needs to sort these tools out. The tools already show their cracks with only two sizes of iPhone and a handful of iOS versions to support; this will only get worse.

We actually paid a company to produce an iPhone app for us initially. What they produced was generally substandard, and they really struggled with any functionality that was not just buttons and pages. I am starting to see why they struggled with so many things, and to understand some of their decisions, even if I would still have done it differently myself.