Playing with analytics - Exceptions by page


It's easy to join tables such as requests or pageViews with exceptions, which will show the number of exceptions that have occurred for a given page. This means you can focus your efforts on reducing unhandled exceptions on the pages that get the most traffic.

requests
| where timestamp > ago(1d)
| join kind=leftouter (
    exceptions
    | where timestamp > ago(1d)
) on operation_Id
| where success == "False"
| summarize exceptionCount=count() by tostring(parseurl(url)["Path"])
| order by exceptionCount desc
| render piechart 


Playing with analytics - Website flows


I've been playing around with Azure App Insights analytics again. I focused on seeing whether there is a way I could work out a user's journey through a website using a simple query. I know this is easy to do with Google Analytics, but I thought I would see if I could do something similar with the Azure App Insights offering.

It turns out it's quite simple, and you can see the journey a user took on one device.

Mob Programming



Last night I attended a session on Mob Programming and working well together, with guest speaker Woody Zuill, who claims to be one of the founders of the practice. The session was at the Co-op Digital building at Federation House in Manchester city centre.


I attended with a few work friends straight after work; luckily it was only a few minutes' walk away.
The evening started with networking over a few beers, with people just getting to know each other before the main talk. Woody talked about his experience in the industry, how mob programming came about, and how much more work can be achieved working in this way, although the velocity cannot be measured, so it's difficult to prove. It made me realise that the small amount of mobbing I have done so far in my career is probably not as structured as it should be, and certainly not what Woody would consider mobbing!


He showed photos from companies all around the world that he has converted to mob programming, including some interesting (and slightly scary!) setups with three 80-inch screens.
He pointed us to a GitHub page where you can run a client that tells you when the next person is up to be the driver, but this feels a bit too strict in my opinion.


Woody really sold it to us though: the benefits such as the whole squad constantly being up to date with the latest changes, fewer meetings, quicker answers, shorter feedback loops, higher squad engagement and so on. One thing I thought about was that mobbing full on, all the time, must be really mentally draining, but he summed it up really well with this statement:
Relaxed, Sustainable
Be prepared to contribute
The right thing
At the right time
In the right way

He also spoke about how to work well together, and how the basis of this is built on individuals and interactions, kindness, consideration and respect. That seems kind of obvious, but it's a nice reference point. He mentioned that being too loud and overpowering takes the light off quieter individuals, and that when mobbing or working in a squad it's important to get the balance right: listening, not overthinking, including the quieter members of the squad, and having consideration and respect for the other squad members.

I managed to grab some of the slides, which are a good read, and took away a quote he left us with which I found interesting:

"The value of anothers' experience, is to give us hope. Not to tell us how, or whether to proceed"

Find a good client for Azure Table storage


Today I played around with Azure Table storage.






A service I was looking at uses a log4net appender to dump all its logging information straight into Azure Table storage (Log4NetAzureExtensions.Appenders.AzureTableStorageAppender). It has months (maybe years) of data, so running a query was going to take a looooooooooong time.

I must say I'd never used Azure Table storage before, so I was unfamiliar with the syntax used for querying results; it looks a bit like PowerShell, but I couldn't seem to limit my result set. I knew that Azure Table storage is a NoSQL store, but that was about as far as my knowledge went.

Visual Studio allows you to connect straight to Azure Table storage, but the UI is really bad, and the query builder even worse. I wanted something quick, where I wouldn't have to spend ages reading a tutorial before I could get something out of it. Guessing at the query language wasn't really getting me far, and each time there was a syntax error it would wait about 30 seconds before returning an error.

I did a quick Google search and came across Azure Storage Explorer. The UI was better, but the query builder still seemed pretty poor. It was slow, and didn't really offer anything more than the Visual Studio UI did.

I then found out that LINQPad has support for Azure tables via a plugin.
This meant I could use LINQ against Azure Table storage. It's pretty fast, and it checks my syntax before I run a query. I really like it!

If you are stuck looking for something similar, I can certainly recommend it for quick snippets against Azure Table storage!

How many levels of indirection before we consider it too many?


So I had a scenario today where I had to look through some code written by another team. Unfortunately, everyone who had worked on the code in the past had left. The code was well written; the problem was the number of levels of indirection.

Don't get me wrong, I like well-formatted code, with strong design patterns and SOLID principles. Indirection is great for testability, but you can definitely take it too far!

The problem I faced was a "URL Helper", whose job was to return a URL based on a key.

No biggie.

The key was a variable, retrieved from a service called "ResourceHelper", which returns a string based on an ID.

Annoying, but still no biggie. I guess the key could change?

The URL was being passed into a generic API caller, along with a few parameter strings, but nothing to really give the game away.

More annoying, as I don't know what's being called, but maybe the request contract changes frequently?

The generic API caller returned a dynamic object, from which a value called resultcode was being extracted.

This is starting to get a bit unnecessary now, but at least I'm kind of following it? Does this mean there is no response contract, or that the implementation of the service contract could change?



I followed what happened after the call to the service for a clue, but the page then redirected to the same page, appending the result code to the query string.

I gave up. I stopped and tried to work out why someone would do this, or whether they were just messing with my brain!

So the page is calling a service whose URL I don't know, by pulling the URL back from somewhere using a key that could change (or at least that I don't know right now), to return a value called resultcode that doesn't have a contract, only to reload the page?

I was annoyed, but carried on anyway. I realised the URL helper was getting the URL from config, and I found a key that vaguely matched the variable name in the code. The URL was something like http://stubservice?return=1

Again no clue.

I questioned why someone would create such levels of indirection. What's the point? What are they trying to achieve? Who is it benefiting? Are they purposefully trying to obfuscate the code? Why me? What were they thinking? Were they having a breakdown? Is this deception?

I found the answer in the end, by looking at the environment configuration in an integrated environment and at the URL it had in config. Completely unnecessary. I hastily refactored the code, adding comments where I couldn't refactor, praying I'd never have to look at code like this again.


Powershell Selectors and Iterators Cheatsheet


I've used PowerShell quite a lot. I find it most useful at work when setting up build pipelines for TeamCity; it's really handy when creating build steps where you need TeamCity to execute some custom functionality, e.g. SpecRun. It's also good when doing some analysis on particular installed NuGet packages. There are many uses, but these are my main ones.



I find the syntax quite easy, but I occasionally get caught out with projections, filtering and iterating, so I thought I'd write a quick cheat sheet post that may help others out there!

Selectors

Select-Object is similar to LINQ's Select; it can be used for:

  • selecting specific objects, or ranges of objects, from a collection of objects that contain specific properties
  • selecting a given number of elements from the beginning or end of a collection of objects
  • selecting specific properties from objects

There are more things we can do with selectors, but this is just what I'll be covering.

To project a collection into a selector, you need to pipe it into the Select-Object cmdlet. You do this using the pipe character, e.g.

Get-Process | Select-Object

You can then specify parameters to filter down to what you want.


e.g.
We can select items from the array, but only elements 0, 2, 4 and 6.
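For example, sticking with Get-Process as a sample collection:

Get-Process | Select-Object -Index 0,2,4,6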




Or we can select the first 5 elements
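For example, to take just the first five processes:

Get-Process | Select-Object -First 5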




Or we can select all elements after the first 65.
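For example, to skip the first 65 and keep the rest:

Get-Process | Select-Object -Skip 65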




We can also create and rename an object's properties, which can be useful when the property name is not very descriptive, or when we are passing output from one cmdlet to another and the next cmdlet accepts and processes objects by property name.

The -Property parameter accepts a collection of hashtables, where you set a name and an expression. The name is the name of the new property, and the expression is a script block whose result becomes the value.

Here is an example where we take the processes from the Get-Process cmdlet and extract three properties: ProcessId, Name and CPU amount.

Get-Process | Select-Object -Property @{name = 'ProcessId'; expression = {$_.id}}, @{name = 'Name'; expression = {$_.ProcessName}}, @{name = 'CPU amount'; expression = {$_.CPU}} -First 5


Be mindful when using this: when we select property names, it actually generates a new object containing only the properties we selected and strips out the rest. This may mean we remove a property we need further down the chain without realising it.


Iterating over Objects
Iterating allows us to perform an action on each element within a collection. There are ways of doing this using a for loop or a while loop, but we can also use the ForEach-Object cmdlet (as in the example below).

The ForEach-Object cmdlet takes a stream of objects from the pipeline and processes each one. It also tends to use less memory, because as objects are processed and passed through the pipeline they can be released and garbage collected.

The cmdlet takes three main script block parameters:
Begin <Script block> executed once, before any objects are processed
Process <Script block> executed once per object being processed
End <Script block> executed once, after all objects have been processed


To skip to the next object being processed inside ForEach-Object, use the return keyword. Be careful with break and continue here: because ForEach-Object is a cmdlet rather than a language loop, they will end the whole pipeline rather than a single iteration.
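As a rough sketch of how the three script blocks fit together, using Get-Process and its WorkingSet64 property purely as an illustration:

# Begin runs once before any input, Process runs once per object, End runs once at the end
Get-Process |
    ForEach-Object -Begin { $total = 0 } -Process { $total += $_.WorkingSet64 } -End { "Total working set: $total bytes" }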


Powershell Comparators Cheatsheet


I've used PowerShell quite a lot. I find it most useful at work when setting up build pipelines for TeamCity; it's really handy when creating build steps where you need TeamCity to execute some custom functionality, e.g. SpecRun. It's also good when doing some analysis on particular installed NuGet packages. There are many uses, but these are my main ones.




I find the syntax quite easy, but I occasionally get caught out with projections, filtering and iterating, so I thought I'd write a quick cheat sheet post that may help others out there!

Comparison Operators:

-eq Equal

-ne Not Equal

-gt Greater Than

-lt Less Than

-le Less Than or Equal


-ge Greater Than or Equal
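A few quick, illustrative comparisons (the values are arbitrary):

5 -gt 3           # True
5 -lt 3           # False
"abc" -eq "abc"   # True
10 -ne 10         # False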


*PowerShell comparisons are case-insensitive by default, so for a case-sensitive comparison you must precede the operator with a c (e.g. -ceq).
An example of this is:
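"PowerShell" -eq  "powershell"    # True  - comparisons are case-insensitive by default
"PowerShell" -ceq "powershell"    # False - the c prefix makes the comparison case-sensitive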


*PowerShell also does not care whether the types match; for example, it will try to convert an integer inside a string into an integer before the comparison. An example of this is:
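5 -eq "5"     # True - the string "5" is converted to an integer before the comparison
"5" -eq 5     # True - here the integer is converted to a string, because the left-hand operand drives the conversion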

PowerShell can also compare collections and strings without having to reference any external modules. Here are some more operators, with examples below:


-contains -notcontains Checks a collection of elements for a specific element

-in -notin Checks whether a specific element appears in a collection (the value goes on the left and the collection on the right, i.e. the reverse of -contains)


-like -notlike Used for string wildcard comparison


-match Used for regex matching
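Some quick illustrations of these (the values are arbitrary):

@(1,2,3) -contains 2            # True  - collection on the left, value on the right
2 -in @(1,2,3)                  # True  - value on the left, collection on the right
"PowerShell" -like "Power*"     # True  - wildcard comparison
"svchost" -match "^svc"         # True  - regular expression match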


We should next look at the boolean operators, which can be combined with any of the above to create more complex statements (examples below).


-and Both comparisons must be true for the result to be true

-or At least one of the comparisons must be true for the result to be true

-not Negates the comparison

-xor Exclusive or: returns true if exactly one of the comparisons is true, and false if both are true or both are false
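For example:

(1 -eq 1) -and (2 -gt 3)    # False
(1 -eq 1) -or  (2 -gt 3)    # True
-not (1 -eq 1)              # False
(1 -eq 1) -xor (2 -eq 2)    # False - both sides are true
(1 -eq 1) -xor (2 -gt 3)    # True  - exactly one side is true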

Knowing all the theory is fine, but I guess what you really want to see is some real-world examples, so I have provided some below.
In my next post I'll be creating a cheat sheet for selectors, filters and iterators, which in my opinion is much more interesting!

Some real world examples:

Gets all the PowerShell commands that contain the word "Process".
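One way of doing this might be:

Get-Command | Where-Object { $_.Name -match "Process" }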

Gets all the processes where the process name matches svchost.
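Again, as a sketch:

Get-Process | Where-Object { $_.ProcessName -match "svchost" }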

Gets all the directories where the name matches either "Desktop" or "Documents".
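Something along these lines, assuming we start from the user profile folder:

Get-ChildItem $env:USERPROFILE -Directory | Where-Object { $_.Name -eq "Desktop" -or $_.Name -eq "Documents" }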


FizzBuzz code kata in F#


https://github.com/craiggoddenpayne/FizzBuzz-FSharp

It's been a while since I got to use a functional language. I did a lot of training on Scala, but never really used F#, so I gave it a go with the FizzBuzz kata.


The rules for FizzBuzz are simple:
Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".

It's a nice, easy challenge when writing in an unfamiliar language, especially when trying to do things properly (TDD).

Have a look at my attempt (and also my history to see how I got there!)
Github: https://github.com/craiggoddenpayne/FizzBuzz-FSharp

If you've attempted this kata, or want to try it in a language you aren't so familiar with, send me the link. I'd be really interested to read up on it!




Spec Flow / Spec Run, Smoke Tests, Transforms and Concurrency issues


So I came across a problem today with SpecRun and concurrency issues.



It started after I wrote a suite of end-to-end smoke tests that are run against our various testing environments, to make sure we get some quick feedback after a deployment using TeamCity.

I used SpecRun to execute my SpecFlow tests from TeamCity. I was using the transform support built into SpecRun to make sure that my configurations were all intact per environment. Everything looked good.

*I hadn't used SpecRun whilst developing the tests (as I was running them using NUnit and ReSharper's unit test runner).*



So I ran the tests, and for the first few runs all the tests passed, but then I started to get concurrency issues and access violations when the tests ran. Lots of red error messages on my screen, and the only feedback from SpecRun was that it couldn't access a .config.bak file.

This was a bit weird, as I was using SpecRun's built-in configuration transformer, and everything so far had just worked! I tried recreating the script that ran SpecRun on TeamCity locally, and the same thing happened: occasionally I got the access violations. Weird.



I browsed the documentation and nowhere could I find anything about this, but after a quick Google search I managed to find another user with the exact same issue. The problem was that the solution was on YouTube, and anything deemed a social-media-type website is blocked at work. So I loaded up the site on my phone and managed to reuse the fix.

So it turns out there's a secret node, not in the XSD I was given for SpecRun, called RelocateConfigurationFile. It instructs SpecRun to rename the config file prior to running the tests. Since you can combine this with the TestThreadId, you can have multiple scenarios running concurrently, as each "session" will have a separate config file.



Here is the code I used to fix my concurrency issues. Hope this helps someone else in the future!




Here is some more information on the node in question: https://github.com/gasparnagy/berp/blob/master/examples/gherkin/feature_files/RelocateConfigurationFile.feature