Sunday, 7 January 2018

Make Yii2 ListView play nice with UI frameworks

This is probably quite a common scenario: you are using a framework control, in my case a Yii 2 ListView, and trying to render a list of items that will form a coherent grid on the page.

The easiest way to get a grid is to use your UI framework's grid system - I am using Ink: http://ink.sapo.pt/ and I like the fact that it is slightly more square than Bootstrap and, imho, looks more modern as a result.

Anyway, if you render a ListView with its defaults (and I guess the same applies in other web frameworks), it tries to help by wrapping items in divs with a class and id. That obviously allows CSS flexibility, but in my case it breaks the grid selectors from Ink, which (rightly or wrongly) expect direct descendants of the Ink grid.

In the case of the ListView, however, you can disable these divs by using options (for the whole list) and itemOptions (for each item). Of course, as well as removing them, you could customise them if that is what you want. What I ended up with is simply:

<?php echo ListView::widget([
    'dataProvider' => $dataProvider,
    'itemView' => '_lessongroup',
    'summary' => false,
    // The following are the two important lines for this example
    'itemOptions' => ['tag'=>false],
    'options' => ['tag'=>false]
]);?>

It's always worth having a dig through the code of these framework controls; it is usually fairly easy to find out whether markup is always rendered or whether there is some way to remove or override it.

Google might also be your friend!

Tracing area "Rewrite" is not recognized

What a pain! Windows 10 has done one of its "feature updates" and now my local IIS sites are not working. You get the rather abstract error above for a site that worked a few days back.

It basically means that a module that would normally handle an IIS feature is not working in some way. Obviously, it might not be installed, but it is more likely that it has been broken by an upgrade. IIRC, if the module is not installed at all, you get a different error.

The event log has the same error plus two other equally unhelpful ones: "Trace provider or tracing area name 'WWW Server' is not recognized. Check the section for currently supported list of providers and areas." and "FailedRequestTracing module encountered problem while reading configuration. No logs will be generated until this condition is corrected."

Anyway, after a quick Google search I decided to uninstall URL Rewrite (which I assumed was the module most likely to be the problem) and then reinstall it via the Web Platform Installer.

After reinstalling only that module, and without a restart, the site loads again!

Thursday, 4 January 2018

ElasticSearch, LogStash and Kibana (ELK) on Azure App Services

Edited: 5th Jan

ELK is a nice logging stack made up of a log collection and processing pipeline called Logstash, a NoSQL-style search database called ElasticSearch and a graphing/dashboard tool called Kibana. It is open source and most of the tooling is free to use. For a small company like ours, this model works well until there is enough money for paid support or larger hosted systems.

There are several problems, however, when trying to extend its use to App Services:
  1. A lot of .Net tutorials are out of date
  2. Azure has changed a lot over the past few years in terms of what is and isn't possible
  3. It is easier if you only want to use a logging framework e.g. log4net but harder if you want to grab IIS logs directly
  4. Lots of solutions involve external storage, which adds cost, especially if you need a dedicated server just to parse the storage for log files.
  5. Lots of the solutions don't look like they would scale well
  6. There are some old and some dodgy looking plugins for things like log4net but these are specific to explicit logging and don't work for things like IIS or other system logs. It is also hard to know if there are any performance problems with these.
This is my attempt at documenting how I have nearly got this all working using the latest version of LogStash (6.1.1) and ElasticSearch.

I will update this article as I have carried out more steps!

Elastic Search

Elastic Search is fairly easy to install using the instructions here (use these to ensure you get the latest versions). They are for Ubuntu/Debian (16.04 in my case) and, as per good practice, you should try to use a dedicated server (I'm using a VM on our office server). This should have at least 16GB of RAM, initially set up as 8GB for the Java heap and 8GB left over for Lucene (i.e. the OS file cache) according to this article. I haven't bothered with that for now, but it will be critical in larger production environments.

The instructions talk about tweaks you can make to elasticsearch.yml; I edited the following:

  • cluster.name: something_useful
  • node.name: node-1   # Could be whatever but for now I am just using 1 node
  • index.number_of_shards: 1   # In my case, 1 shard and no replicas
  • index.number_of_replicas: 0
(Edit: removed the index.* settings, which can no longer be set in elasticsearch.yml in newer versions of ES)

I did initially change the network.host setting (to my interface IP address - it is localhost by default) to ensure I could access the database from outside of the VM. I won't go into details here but clearly you need to decide what needs to be able to route to this new box, what DNS needs setting up, tunnels, port-forwards etc.
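
For reference, a minimal sketch of the elasticsearch.yml lines I ended up touching (the IP address is a placeholder for your own interface address):

cluster.name: something_useful
node.name: node-1
network.host: 192.168.0.10      # localhost by default; set to your interface IP to allow access from outside the VM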

Using the default port, I first tried curl -X GET 'http://localhost:9200' to check that it worked correctly (it did!) and then ran the same command from another machine on the network to ensure I had set up the VM networking correctly (as always, do things step-by-step to make it easier to debug problems).

I then started thinking about security and decided to proxy Elastic Search behind nginx for a number of reasons.

nginx proxy

There are other proxies besides nginx, but it is lightweight, modern and well-supported, so it made a good choice for my Ubuntu server. I installed it from the Ubuntu repos and then modified the config to gain the following advantages:

  • Basic authentication, which isn't supported by ElasticSearch without their paid X-Pack tooling. This lets me restrict access to users with the correct password
  • SSL termination. Not set up yet, but if nginx is on another server, it can be dedicated to terminating SSL and proxying onwards to Elastic Search over plain http
  • Re-use of connections for clients that don't support persistent connections
  • Locking down of various parts of the elasticsearch API using location blocks.
In my case, I followed the instructions here, took the bits I wanted and modified the default site configuration (since I won't be running other sites on this box):

upstream elasticsearch {
    server 127.0.0.1:9200;
    keepalive 15;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Look at the apache2-utils package to use htpasswd to generate this file
    auth_basic "Protected";
    auth_basic_user_file /etc/nginx/passwds;

    location ~* ^(/production)|(/_bulk) {
        proxy_pass    http://elasticsearch;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
        proxy_redirect off;
    }

    # Block anything not specifically allowed
    location / {
        return 403;
    }
}

Note I haven't enabled SSL yet, and there is another block (not shown above) to allow the root location, which is used by LogStash. Since I didn't know how to combine this with the _bulk regex, I just duplicated the proxy block and used location = /.
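
That extra block is just a copy of the one above with a different matcher - something like this (a sketch, not necessarily the tidiest way to do it):

location = / {
    proxy_pass    http://elasticsearch;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_redirect off;
}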

Once again, I reloaded nginx and checked that I could now access elasticsearch from outside of the server on port 80 and no longer on port 9200. I will switch the Elastic Search server firewall on at some point to only allow 80 and 22, but for now the server is internal so it is not critical.

Posting to Elastic Search

Before cracking on with LogStash, I decided to ensure that I could post a message to ES from outside of the server using curl on my local Windows box (using Git Bash if you want to know!) with the command from the previous installation instructions: curl -X POST 'http://myserverurl.local/production/helloworld/1' -H 'Content-Type: application/json' -d '{ "message": "Hello World!" }' (note that ES 6 insists on an explicit Content-Type header for requests with a body)

This worked and proved the basic mechanism was all good.
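
One caveat: once the basic auth block above is in place, curl needs the credentials too or nginx will answer with a 401, so the same test becomes something like curl -u someuser:somepassword -X POST 'http://myserverurl.local/production/helloworld/1' -H 'Content-Type: application/json' -d '{ "message": "Hello World!" }' (the user and password being whatever you put in the htpasswd file).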

Logstash

Logstash is the application that reads from a variety of sources and transforms messages into a format that works with ElasticSearch. In fact, you can use it without ElasticSearch, but the two are designed to work well together, so rather than re-invent the wheel, I decided the first step was to set this up on my Windows machine and start by outputting IIS log data to ElasticSearch (since this is what I will eventually do).

I found an article here but unfortunately, it was based on an older version of LS, so I have listed my instructions below, which need to be used in addition to the instructions linked.

  1. I didn't bother putting all the server logs into a single file since, in my ultimate usage, there will be only one site whose logs I need. I simply ensured a single local site was outputting to a specific location and that all the logging fields were ticked.
  2. I ensured my JRE 1.8 was up to date (it was already installed)
  3. I added a JAVA_HOME environment variable pointing to the JRE root directory
  4. Once Logstash is downloaded, DON'T put it into Program Files, since the batch file has a problem with spaces in paths. I instead put it into C:\Elastic\Logstash-6.1.1
  5. The memory tweaks are possibly not needed; they are no longer in logstash.bat but in config\jvm.options and are set to 1G by default
  6. Note that the conf file you need to create must NOT go into the config directory but a separate directory (called conf if you want). This is because logstash will cat all the files in the specified directory together to form a master config file.
  7. I copied the example config which was mostly correct but I did need to make some changes since the supplied example does NOT work with the latest versions of LS.
    1. The type at the top is used for the index name in ElasticSearch so choose something you are happy with
    2. The path obviously needs to match where your logs are located
    3. The match line was correct!
    4. The syntax of event["name"] is no longer supported and causes an error. You need to change these to event.get("name") or event.set("name", value).
    5. I removed the dns block which attempts a reverse lookup of the ip address. I don't imagine this is particularly useful or efficient except in a corporate environment.
  8. The format of the elasticsearch output block is incorrect (a corrected sketch follows this list)
    1. Remove embedded; it is no longer a valid option
    2. host should now be hosts and is like ["http://yoururl:8000"] (it takes a number of combinations of format but is basically a URL)
    3. port and protocol should go in the hosts entry, they cannot be specified outside
    4. Set index to use the correct format. You can use an autoindex style so that a new index gets created for e.g. each month in this example. This makes searches on a month faster at the cost of large indexes, whereas doing e.g. per day would be slower for month queries but would cause smaller indexes which would be better for lots of daily queries.
    5. If you are using basic auth, you need to set user and password appropriately.
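To save some trial and error, here is roughly what my corrected elasticsearch output block ended up looking like (treat it as a sketch: the host, index name and credentials are placeholders for your own values):

output {
  elasticsearch {
    hosts => ["http://myserverurl.local:80"]   # replaces the old host/port/protocol settings
    index => "production"                      # or e.g. "production-%{+YYYY.MM}" for an index per month
    user => "someuser"                         # only needed if you proxy behind basic auth
    password => "somepassword"
  }
  # stdout { codec => rubydebug }              # handy while testing; comment elasticsearch out instead
}
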
I didn't use the Windows service approach since I will be installing it on App Services, but to test it, I first used the stdout output and commented out elasticsearch. When you run the batch file, you need to run .\Logstash.bat -f ../conf from the command line (in the bin folder). The agent argument from older tutorials is no longer valid (and not needed).

The actual output might take a while to appear and depends on whether your site has logged anything. IIS flushes its logs fairly lazily and you might have to hit the site a number of times before anything happens in the command window (you should see JSON documents displayed).

If you get an error, the first error line should have the details in it, the subsequent ones are likely to have non-useful information like call stack information.

Logstash to ElasticSearch

I then decided to comment out stdout and put elasticsearch back in. This didn't work at first because Logstash calls the root of the ES API to get a heartbeat, and I didn't have that allowed in my nginx config.

Once I added that, I then realised it was calling /_bulk/ instead of /production/ so I also added that in. After restarting nginx, Logstash automatically got going again and added the records into elastic search.

I tested this by running this in a browser: http://myserverurl/production/_search?pretty=true just to ensure that there were records returned.

Installing Kibana

We actually use Mosaik to aggregate dashboards so I am hoping that I can output what I produce in Kibana to Mosaik but it won't be the end of the world if not.

Kibana is produced by Elastic and so it plays nice with the data in ES and can be used to produce easy graphs like counts of certain events, frequency over time etc.

Following good practice again, I created a separate VM for Kibana. This allows for isolation and lets both Kibana and Elastic Search be independently scaled at a later date as the company grows. I did this on the same VM host, so I have chosen to use a host-only network for Kibana to talk to Elastic Search. This allows me to expose Elastic Search to the host-only network and localhost but NOT to the public-facing interface.

Kibana is installed from the repository, like ElasticSearch.

Once installed, I attached the VM to the same host-only network as the ElasticSearch server so that they could talk over host-only networking. This meant editing elasticsearch.yml to include the host-only IP address in the network.host array so that I can contact Elastic Search directly from the Kibana box.

I didn't change many settings in /etc/kibana/kibana.yml:

server.host: "0.0.0.0"  # This server is not visible outside of the company so no need to tie down web access
server.name: "whatever it is - used by the software for display purposes"
elasticsearch.url: "http://172.16.0.46:9200"     # 172.. is the host-only ip address of the ES server

I am not too bothered about SSL internally on the host-only network, but I might eventually link to elasticsearch via nginx and therefore need to set the basic auth (which I can then use to lock down the endpoints that are allowed for Kibana differently from those used by external logstashes).

I had installed an old version of elastic search from earlier instructions, so I ended up purging the lot and installing again, since I was seeing all manner of errors caused by a messy upgrade. I therefore ran my local desktop LogStash to populate the ES database again before attempting to use Kibana.

Basic Kibana Usage

I don't have space to go into massive details - mostly because I don't know enough - but once Kibana is installed, you should be able to go to http://yourkibanaserver.com:5601/ and if everything has gone OK, you should see a neat home page. If not, you will see a grid with any warnings or errors (which was where I noticed the old version of ES and had to fix it!).

Since I had data, I clicked the button on the top right to create an index pattern. An index is a dataset in ES terms, and the pattern will match the index that you created in LogStash when inserting the data into ES. You can have separate indexes for separate data like IIS logs, app logs etc. and I assume you can join these indexes, although I would expect joins to be expensive compared to queries in a single index.

In my case, I have a single index called production so I typed that into Kibana and it instantly told me it had matched an index in ES (which was nice!)

I then decided to try some visualisations. This was really easy (but watch out for the time range filter in the top-right of the screen!). Click Visualize in the menu and choose a graph type. I started with Metric, which is a basic way of counting stuff. Choose your index name and it should default to showing a count of the items in that index. If you change the time range in the top-right of the screen, the number updates immediately. If you want to change what is being counted, you click the blue arrow next to the metric name on the left, which reveals a number of aggregation properties. You can also add filters at the top, for instance only items that equal /api/request1, and these can be individually applied and switched off (but don't accidentally delete them!). You can save your graph in the top right and it will be added to the list on the Visualize page.

I then tried a vertical bar chart and chose the same index. Again, it defaults to a useful Y axis (a count) and then you have to tell it what to use for the X axis. You choose this from the buckets list; if you choose Date Histogram and the relevant timestamp field in the data, Auto Interval works quite well. Then hit the play button under the index name in the top-left of the designer to apply the changes. If you change the time range, the X axis automatically re-scales, which is really useful.

Next, I want to use WebJobs to pull IIS logs from my App Services.

WebJobs

WebJobs are Azure's mechanism for letting an App Service run startup or background tasks alongside the site. I am going to write one that runs LogStash to sit and watch the IIS logs, attempt to push them to my corporate ES database, and see whether they start coming in...

I have realised that LogStash itself is way too big and bloated to run on an App Service. The files themselves are over 100MB (since they include JRuby) and require Java to run - which might not be a massive problem. So..... we can use Filebeat instead. It is much smaller than LogStash and runs as a standalone exe (30MB) without external modules. There are various modules that might not be required, but the bulk of the size is the exe so it looks like just the ticket....BUT....

....Filebeat has nowhere near the functionality that LogStash does for parsing and modifying log entries. This is fine if your log data is already rich enough, but IIS log data, for instance, has no context; LogStash can add it and Filebeat cannot. So what I need to do is create a web job that uses Filebeat simply to watch the log files and forward them to a LogStash server, which can run in our office server environment, do the parsing of the log files, convert them into decorated objects and then pass them directly to ElasticSearch.

Installing and Testing Filebeat locally

To continue with best-practice, I downloaded a zipfile for Windows with Filebeat in it and extracted it onto my local machine. I then edited the default filebeat.yml config file (this is what filebeat looks for by default so might as well use this one!).

I decided to test it reading some real Azure IIS logs (where we don't have control over what is logged - I don't think) and writing them to the known-working elastic search API. I downloaded a couple of IIS logs from an Azure App Service, put them into c:\temp and then edited filebeat.yml to set the following (there is a sketch of the result after the list):

  • set enabled to true for the log prospector
  • set the log paths to include c:\temp\*.log (note paths is an array and needs the - for entries)
  • set exclude_lines to ['^#'] to exclude IIS comments
  • set index.number_of_shards to 1. Not sure if this is relevant but I only have 1!
  • set up the output for elasticsearch to include my (now https) API endpoint, set protocol to https and set the username and password for basic auth. Note that basic auth only makes sense over https, otherwise anyone on the network can read the credentials!
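Put together, the relevant parts of my filebeat.yml looked something like the sketch below (Filebeat 6.x syntax; the endpoint and credentials are placeholders for your own values):

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - c:\temp\*.log
  exclude_lines: ['^#']          # skip the IIS comment/header lines

setup.template.settings:
  index.number_of_shards: 1

output.elasticsearch:
  hosts: ["myserverurl.local:443"]
  protocol: "https"
  username: "someuser"           # basic auth only makes sense over https
  password: "somepassword"
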
In preparation for the web job (which, according to this, tries to run something called run.bat/run.exe etc.), I created a run.cmd in the same directory as filebeat.exe. run.cmd has to be in the root of the zip file I will upload to Azure to run Filebeat for me. Inside run.cmd, I simply have .\filebeat.exe (note it will look for the default config file in the same directory; this can be overridden if needed)

There are now two jobs I can do in either order. Firstly, I need to set up LogStash on my central server and get Filebeat to pass logs via LogStash, which will put them into a useful format before storing them in Elastic Search. Secondly, I need to test that App Services will run my cmd executable from the zip file and send the logs successfully over the internet to my database.

I'm going to setup logstash first because that is internal to my network and safer to play with.

Setup Logstash Centrally

As mentioned, filebeat is lightweight enough for Azure but doesn't include the "grok" of logstash to transform log entries into decorated objects for Elastic Search. Rather than create another cloud server, I can instead create a central one at the office and have the multiple instances of FileBeat pass their messages to the single server for processing. Eventually, I guess, I can scale out the number of logstash instances if needed.

It generally makes sense to keep these things on separate servers. Sharing servers can hide problems now that will only surface if you ever do separate them out later. You could use Docker if you want slightly more lightweight servers, but these programs run on Java (urgh) and therefore need plenty of beef to run correctly.
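
When I get there, I expect the plumbing between the two to be fairly simple: Filebeat stops talking to ElasticSearch directly and ships to Logstash instead, and Logstash listens with its beats input. A hedged sketch of what I expect (host name and port are placeholders):

# filebeat.yml on the App Service side - replaces the elasticsearch output
output.logstash:
  hosts: ["logstash.mycompany.local:5044"]

# and in the pipeline config on the central Logstash server
input {
  beats {
    port => 5044
  }
}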

TBC



Wednesday, 6 December 2017

MongoDB Insert from Mongo Shell but not from C# API

Another annoying one that makes sense after you work it out!

I tried to run a console app which was inserting documents into a MongoDB database via a tunnel. The app had previously been used successfully on another DB that wasn't on a tunnel and didn't have any auth so I assumed it was related.

When I inserted, I got the following error:

Command insert failed: not authorized on resources to execute command

(resources is the name of the database I was using)

But when I ran an insert directly in Mongo Shell with the same creds, it was fine.

The problem? I was not specifying the database in the connection string, which means that even though the insert command was specifying the database, the connection wasn't authenticating the user against the resources database (with no database in the URL, the driver authenticates against the admin database by default).

Basically, instead of this:

mongodb://MongoUser:thepassword@127.0.0.1:27017/

it should have been this:

mongodb://MongoUser:thepassword@127.0.0.1:27017/resources

You can also use the MongoUrlBuilder, which allows you to set all the options you might need and have it build the URL for you, e.g.

// Build the connection URL programmatically rather than hand-crafting the string
var builder = new MongoUrlBuilder();
builder.Server = new MongoServerAddress("127.0.0.1", 27017);
builder.Password = "thepassword";
builder.Username = "MongoUser";
builder.DatabaseName = "resources";   // authenticate against the resources database
var url = builder.ToMongoUrl();
var client = new MongoClient(url);

Monday, 4 December 2017

IdentityServer 3 certificate loading woes!

TL;DR: Set Load User Profile to True on the Application Pool (advanced settings)

We have a cloud system that uses IdentityServer and loads signing certificates from pfx files in the App_Data folder - works great.

We've deployed an on-premises system based on the same code and it doesn't work. The following errors are logged:


  • ERROR Startup - Signing certificate has no private key or the private key is not accessible. Make sure the account running your application has access to the private key
Which is definitely not true. Same certs as production, same passwords, files definitely there and can be imported into the Cert store to prove it.

I ignored this initially, but when we then logged in over OpenID Connect, it got to token signing and bang:

  • System.InvalidOperationException: IDX10614: AsymmetricSecurityKey.GetSignatureFormatter( 'http://www.w3.org/2001/04/xmldsig-more#rsa-sha256' ) threw an exception.
  • SignatureAlgorithm: 'http://www.w3.org/2001/04/xmldsig-more#rsa-sha256', check to make sure the SignatureAlgorithm is supported.
  • Exception:'System.Security.Cryptography.CryptographicException: Invalid provider type specified.
  • If you only need to verify signatures the parameter 'willBeUseForSigning' should be false if the private key is not be available
Which is actually a pile of errors that are not really related to the problem. I wasted a ton of time creating a different format of signing key (lots of forums say that .Net doesn't work with the Enhanced Security Provider): I created some new certs directly in the local Cert store, exported them as pfx into the same location and changed the web.config to use these new keys, but now the site wouldn't load! I disabled CustomErrors and got this error:

  • System.Security.Cryptography.CryptographicException: The system cannot find the file specified.
Which is really weird since the new certs are in the same place. This is, however, a more useful error. I double-checked everything and it still seemed strange until I realised that the error doesn't necessarily mean the cert file itself is not found: when the pfx is loaded, Windows tries to put the private key into a key container under the user profile (which seems odd when loading from a file, but anyway...), and it is that container it cannot find. I then found a useful blog post that says the app pool doesn't load the user profile by default, which is why it doesn't work. IIS -> Application Pools -> Select -> Advanced Settings -> Load User Profile -> True

I enabled this and the site worked! I then reverted to the original certs and they worked too so this was simply IdentityServer covering up a random error or not providing a helpful enough error message. This also explains why it works on Azure, where the App Services system must simply enable Load User Profile by default.
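
For context, the sort of certificate loading that trips over this looks something like the hypothetical sketch below - this is not the actual IdentityServer setup code, and the file name and password are made up. The point is that importing a pfx writes the private key into a key container under the current user profile by default, which is why the app pool setting matters (passing X509KeyStorageFlags.MachineKeySet in code is the usual alternative if you can change the code):

using System.Security.Cryptography.X509Certificates;
using System.Web.Hosting;

// Hypothetical illustration of loading a signing certificate from App_Data.
// By default the pfx import stores the private key under the user profile, so with
// "Load User Profile" off you get "The system cannot find the file specified"
// even though the .pfx file itself is sitting right there.
var certPath = HostingEnvironment.MapPath("~/App_Data/idsrv-signing.pfx");
var cert = new X509Certificate2(certPath, "pfx-password",
    X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);  // machine key store avoids the user profile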

Wednesday, 29 November 2017

NuGet package not visible in Manage NuGet Packages

I had published a package to a private NuGet repo and even though it was visible in one project, it was not visible in another. I looked on the server and it was definitely there.

This happened because the NuGet package was targeting .Net 4.6.1 but the project was only targeting 4.5.1 so it was simply not listed.

I updated the project to 4.6.1 and the package appeared!

Cannot install NuGet package even though it is listed

The package I wanted to install was listed in the NuGet packages list, but after clicking Install, it seemed to wait a bit and then do nothing.

HOWEVER, clicking back onto the output tab in Visual Studio (since it was unhelpfully changing to the Error List tab), I noticed that there was a constraint that didn't allow the installation. In my case, the Microsoft.Net.Compilers package was too old for the NuGet package to install.

Once I saw that, it was easy to install the NuGet package after updating the other one!