Friday, December 28, 2012

Podcast Reunited

I used to struggle to get iTunes to download podcasts properly. Sometimes they wouldn't download fully, or couldn't be resumed and had to be restarted (IIRC, it might have just been other downloads that did this). So I would download them as MP3s from the shows' archive websites, e.g. .NET Rocks!.

There is a problem with this, though. When you add the files to iTunes, they are picked up as normal music, not as podcasts. This is sorted out easily by going into the file info and changing the Media Type to Podcast. Simple.

What is not so simple is why there are now two separate podcast entries - one for the subscription we used before, and one for the episodes that were downloaded manually. I used to ignore this, and just have them all on one playlist, but recently when I was re-importing everything into iTunes on a re-installed PC, I noticed this again and wanted to solve the issue.

MP3 files come tagged with ID3 tags. The standard ones, such as artist, album and track number, are easily edited in most audio programs, including iTunes. But for podcasts there are some extended ones, namely PODCAST, PODCASTURL, PODCASTID, etc., which I found out about using a program called Mp3tag.

You can update these tags using Mp3tag, and probably other software. For more info on this, see the question I asked (and answered) on Superuser.
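
If you'd rather script this check than click through Mp3tag, here is a minimal, read-only sketch using the TagLib# library (my choice of library - any ID3v2-capable one would do). It just lists the ID3v2 frames in a file so you can see whether the podcast frames are there; as far as I can tell, Mp3tag's PODCAST, PODCASTURL and PODCASTID fields map to the PCST, WFED and TGID frames. The file path is only an example.

```csharp
using System;

class InspectPodcastFrames
{
    static void Main(string[] args)
    {
        // Point this at one of your episodes.
        string path = args.Length > 0 ? args[0] : @"C:\Podcasts\episode.mp3";

        var file = TagLib.File.Create(path);
        var id3v2 = (TagLib.Id3v2.Tag)file.GetTag(TagLib.TagTypes.Id3v2);

        if (id3v2 == null)
        {
            Console.WriteLine("No ID3v2 tag found.");
            return;
        }

        // List every frame ID - look for PCST, WFED and TGID among the usual
        // TIT2/TALB/TPE1 suspects.
        foreach (var frame in id3v2.GetFrames())
            Console.WriteLine("{0}: {1}", frame.FrameId, frame);
    }
}
```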

Sunday, November 11, 2012

Refactoring Databases

On Friday I gave a presentation at Entelect's internal dev days on Modern Database Development: how the best practices of agile development are widely known and regarded in the community, yet we rarely see the same rigour applied to database development.

And it's not new. Martin Fowler, Scott Ambler and Pramod Sadalage have written books and blog posts on Evolutionary Database Design and Refactoring Databases, going back to at least 2003. But we don't always see people employing the techniques and tools that have been discussed in the past ten years.

I didn't even know about dbdeploy or the Red Gate developer tools until this year. I've only been on one real-life project that used a custom-developed database patching tool to roll out database changes and provide a base for integration testing. But I definitely believe these tools and techniques must be employed more regularly.

I have to say, the presentation went well. I received a lot of positive comments, and I think the audience got something out of my talk, which is great news. In fact, I was even asked straight afterwards to help someone compare the data in two production databases.

Look up the tools I mentioned: dbdeploy, Red Gate Source Control, SQL Compare and SQL Data Compare, as well as Microsoft's SQL Server Data Tools. dbdeploy is open source, the Red Gate developer tools come with trial versions, and SSDT is free if you have Visual Studio.
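
To give a feel for the dbdeploy approach: it runs numbered SQL change scripts in order and records which ones it has already applied in a changelog table. A hypothetical change script (the table and column names are made up) looks something like this, with an optional undo section after the --//@UNDO marker:

```sql
-- 003_add_customer_email.sql
-- Applied when migrating the database forwards.
ALTER TABLE Customer ADD Email NVARCHAR(256) NULL;

--//@UNDO
-- Applied when rolling the change back.
ALTER TABLE Customer DROP COLUMN Email;
```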

Let's all get better at what we do.

Cheers,

James

Wednesday, February 22, 2012

A look at ASP.NET MVC 4

I just watched the talk by Scott Guthrie at Techdays 2012 in the Netherlands entitled "A look at ASP.NET MVC 4"; see the video at the bottom of this post if you're interested.

In the talk, Scott covers some of the new features in ASP.NET MVC 4, as well as touching on some new features of Entity Framework Code First. The highlights are:

  • Database Migrations
  • Bundling/Minification Support
  • Web APIs
  • Mobile Web
  • Real Time Communication (SignalR)
  • Asynchronous Support using language features (async and await)

An extremely useful addition to EF Code First is database migrations, allowing you to progressively develop your code and database together. Migrations allow you to deploy or roll back different versions of your database. Each migration can apply its changes to the database or remove them again (e.g. adding a column when migrating up and removing it when migrating down, or even extracting data to a temporary table and processing it during a potentially destructive migration). The one project I was on had a whole custom-written database patching/versioning framework which enabled true integration testing, as well as generation of deployment scripts; EF Code First with migrations can probably provide that out of the box now. Very nice.
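
As a rough sketch of what a code-based migration looks like (the class, table and column names here are hypothetical, and the API shown is the released System.Data.Entity.Migrations one, which may differ slightly from the preview build in the talk):

```csharp
using System.Data.Entity.Migrations;

// Adds an Email column when migrating up, and removes it again when migrating down.
public partial class AddCustomerEmail : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Customers", "Email", c => c.String(maxLength: 256));
    }

    public override void Down()
    {
        DropColumn("dbo.Customers", "Email");
    }
}
```

Migrations like this are generated and applied from the Package Manager Console with Add-Migration and Update-Database.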

The bundling and minification support is a welcome addition too. By convention, instead of referencing specific scripts or CSS files, you can reference a folder, and all the relevant resources in that folder will be bundled and processed together. An HTML helper is also available which provides versioning of the bundles by appending a hash of the resources to the query string. Custom bundles can be defined and custom processors can be implemented as well; for example, in the talk Scott shows that you could use CoffeeScript and LESS processors in a bundle, greatly improving a web developer's life.
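
A minimal sketch against the released System.Web.Optimization API (the exact API moved around a little between the preview shown in the talk and the final release); the bundle names and file paths are just examples:

```csharp
using System.Web.Optimization;

public static class BundleConfig
{
    // Called from Application_Start: BundleConfig.RegisterBundles(BundleTable.Bundles);
    public static void RegisterBundles(BundleCollection bundles)
    {
        // Everything matching the pattern is combined and minified into one response.
        bundles.Add(new ScriptBundle("~/bundles/jquery")
            .Include("~/Scripts/jquery-{version}.js"));

        bundles.Add(new StyleBundle("~/Content/css")
            .Include("~/Content/site.css"));
    }
}
```

In a Razor view you then render them with @Scripts.Render("~/bundles/jquery") and @Styles.Render("~/Content/css"), and the helper appends the content hash to the URL for cache busting.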

The WCF Web API is now part of ASP.NET and is known as the ASP.NET Web API. It provides the power of WCF with the ease of ASP.NET MVC, while respecting the HTTP protocol a lot more. It provides built-in support for writing code once while supporting multiple response types (JSON/XML), OData for querying, filtering and sorting data just by adding to the query string (no code change... as long as your code returns an IQueryable), and also a nicer programming model for HTTP responses.
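
A small, self-contained sketch of a Web API controller (the Product type and in-memory data are made up, and routing is assumed to be the default /api/{controller}/{id}):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Web.Http;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductsController : ApiController
{
    // In-memory data purely for illustration.
    private static readonly List<Product> Products = new List<Product>
    {
        new Product { Id = 1, Name = "Widget", Price = 12.5m },
        new Product { Id = 2, Name = "Gadget", Price = 7.0m }
    };

    // GET /api/products
    // Returning IQueryable is what lets OData query options like
    // ?$filter=Price gt 10&$orderby=Name be applied from the query string
    // (depending on the Web API version you may have to opt in with an attribute).
    public IQueryable<Product> Get()
    {
        return Products.AsQueryable();
    }

    // GET /api/products/5 - the response is JSON or XML depending on the Accept header.
    public Product Get(int id)
    {
        var product = Products.FirstOrDefault(p => p.Id == id);
        if (product == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return product;
    }
}
```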

The default MVC project templates come with CSS that uses media queries for a more adaptive feel to the application. And for scenarios where media queries aren't enough, there's also support for detecting a mobile client and returning a completely different template or view for a request. This allows you to create a single web application that caters for a wider range of clients.
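
Out of the box, dropping a Views/Home/Index.Mobile.cshtml next to Index.cshtml is enough for mobile browsers to be served the mobile view. For finer-grained control you can register your own display modes; here's a rough sketch against the released API (System.Web.WebPages) - the "iPhone" suffix and user-agent check are just an example, and the preview API in the talk differed slightly:

```csharp
using System;
using System.Web.WebPages;

public static class DisplayModeConfig
{
    // Call this from Application_Start.
    public static void RegisterDisplayModes()
    {
        DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("iPhone")
        {
            // When the condition matches, the view engine looks for
            // Index.iPhone.cshtml before falling back to Index.cshtml.
            ContextCondition = context =>
                (context.GetOverriddenUserAgent() ?? string.Empty)
                    .IndexOf("iPhone", StringComparison.OrdinalIgnoreCase) >= 0
        });
    }
}
```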

The real-time communication with SignalR allows the server to push data through to the clients. SignalR can detect whether the client supports WebSockets and, if not, falls back to various methods that provide the expected behaviour, e.g. long polling.
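
A minimal hub sketch, written against the API as it looks in later SignalR releases (Microsoft.AspNet.SignalR); the builds demoed around this time used a slightly different namespace and Clients syntax, and the method names are just examples:

```csharp
using Microsoft.AspNet.SignalR;

// When any client calls Send, the server pushes the message out to every
// connected client by invoking their addMessage callback.
public class ChatHub : Hub
{
    public void Send(string message)
    {
        Clients.All.addMessage(message);
    }
}
```

On the browser side the generated proxy exposes the hub as $.connection.chatHub, so the client both calls Send and implements addMessage.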

The async support leverages the existing asynchronous controller functionality, but allows you to write it using the async and await keywords that are part of the next version of .NET.
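
For example, a controller action that awaits a downstream HTTP call could look something like this (the URL and controller are made up; it assumes .NET 4.5 and the MVC 4 support for Task-returning actions):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class NewsController : Controller
{
    // Returning Task<ActionResult> and awaiting frees the request thread while the
    // downstream call is in flight, instead of the older XxxAsync/XxxCompleted pattern.
    public async Task<ActionResult> Index()
    {
        using (var client = new HttpClient())
        {
            string headlines = await client.GetStringAsync("http://example.com/headlines");
            return Content(headlines);
        }
    }
}
```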

I highly suggest that you watch the whole video; as always, the Gu's talk is full of useful information.

Tuesday, February 21, 2012

Quality Software Requires Quality Processes and Commitment

A former colleague of mine, Deon Thomas, posed a question in his blog post: "Can 'agile' software environments, truly accommodate decent code reviews?". Deon asked me for my views, and I thought I would air them here and give a full answer, instead of just leaving a simple comment :) I hope it's not too long and isn't too far off track :D I have some time off until I start at my next employer, Entelect Software, so I have lots of time to write :D And I tend to babble :B

First, to answer his question, I believe that every team that develops software can gain tremendous value from regular, detailed, code reviews. I also believe it can be successful in an "agile" environment.

I'll describe the very first project I was involved with as a professional developer, and how the team ran - successfully, in my opinion - in an "agile" environment.

My first project as a junior software developer was as part of a team of around 6-10 software developers (there was also a Business Analyst, Tester and Project Manager, as well as a DBA and other production support staff). The product was quite a mature online trading system, with a big and complex enough source base to keep the team busy (probably could have been a lot simpler though; there was quite a bit of spaghetti code, and a few little intricacies in the long ~15 minute build process, not to mention it was still stuck on Java 1.4 when Java 1.6 had already been out for a while).

Anyways, as a developer, you would have a few different responsibilities: development of new features, bug fixing and maintenance, production support, code review, and a kind of regression testing - you would test the software manually whenever you added a feature or made a change, to make sure you didn't break anything (there weren't many unit/integration tests, and those that were there hadn't been run in years; the system wasn't very amenable to automated testing). There were multiple branches in development at any point in time - since there was only one tester, development speed tended to be much faster than testing speed, and so only so much could fit into a release.

The flow of development was pretty typical (and I think I can remember all the steps :D). You would be assigned a task in the ticket/bug/issue-tracking system, investigate it, and estimate how long you expected it to take. After it was discussed with the project manager and/or team lead, you would then know when it was destined for release, and which branch to work on.

You would perform your work on a certain branch, and once you were happy it was working correctly (testing it manually, and capturing screen shots of your testing...), you would commit your code (as often as possible, preferably small changes as they occurred, or daily, but IIRC the body of work would be quite large and so larger commits every few days or after a week or so could/did happen too).

You'd then attach to the ticket a document detailing what your change involved, your screenshots for the tester to peruse at a later point in time, and I think also the branch and revision number. The ticket would then be assigned by a senior member to another developer for code review, who would add their comments on the ticket (either in a separate document, or as comments, or in the aforementioned document... I can't remember). Once you'd discussed with the reviewer what looked good or bad, and made any changes you had to, there would be a second round of code review to make sure the changes were applied correctly.

Since there were multiple concurrent branches, you'd also have to merge upstream (either before or after code review, I can't remember) and sort out any merge conflicts (which were very common, because of the spaghetti code and shotgun surgery). The ticket state would then be changed to ready for testing.

When the branch was deployed for testing, the tester would perform regression/system testing, and when successful it would go through a User Acceptance Testing phase with the business users and the Business Analyst giving their stamps of approval before it would be readied for release.

A developer would then have to compile a deployment procedure document and prepare the release artifacts. This would be reviewed by another team member (usually a senior member), and then it would be handed over to the production support team. If the deployment was a tricky one, the developer who compiled the deployment procedure notes would also sit in on the deployment, or just be on standby from home (the senior member would also be a backup standby support person).

So, that's quite a detailed explanation of the project's development process, at least from my view. There would probably also be various planning and prioritising meetings that I wasn't party to. But you get the idea.

I think one of the best parts of the project was that a developer would experience various roles in the development process: development of new features, maintenance involving bug fixing and refactoring, code review, merges, compilation of deployment documentation, deployments, test environment and production support (two people would be in the support role per week or every two weeks, using a round-robin type of assignment, to help production support with any issues as top priority, and to also help the tester with her environment).

Alright, to get back to Deon's question, I think this project worked in an "agile" way. New features and bug fixes were developed in an iterative fashion. Tasks were prioritized and assigned to a certain branch/release (or iteration), and only the important pieces of work were performed first; if something wasn't important to the business it would be left until later (or put in the backlog).

The development pipeline was controlled using the issue tracking system, and each person in the team knew the correct workflow of the tasks: open > investigation > development > review > testing > UAT > closed. If any of those phases in the workflow had problems, another iteration of work for the previous phases would be performed, e.g. if the code review found problems, more development would occur before another review was done; only once the code review passed would it be merged and eventually released into regression/systems testing when necessary, and if a problem occurred during testing, the piece of work would probably require more development, review and then re-testing before it was sent to UAT.

This spiral model of development worked very well, and in my opinion produced high-quality software. It also worked because each person on the team was highly capable of performing each of the tasks they were responsible for at the various stages of the project. If they didn't have the know-how, they were taught (and it was documented and kept up to date on the project's wiki). Proper prioritization was performed; estimation of tasks allowed for realistic goals for a release, and a manageable set of work.

Now, I think another reason that it worked was because there were quite a few developers. The work could be spread out so that one person wasn't bogged down all the time - although, the more senior staff would be assigned quite a lot of work, requiring delegation if they did get bogged down.

I think the problem comes in with smaller teams, where you tend to always have a big list of tasks to do. Instead of focusing on improving quality, quality starts slipping as you try to deliver more and more features being asked from the business, with either unrealistic or highly demanding deadlines. This is also made worse when some of the resources on the team have to multi-task between different projects, or are stuck in meetings constantly: this impacts the project even more when those bogged down resources are required by other resources on the project, and then there's so much contention that people aren't as productive as they could be.

Trying to be more productive, the team will start to forgo unit/integration tests, or do some testing but only look at one case (the success case, ignoring the multiple failure cases). Code reviews will be done less often, or in less detail, or not at all. And so on: fewer quality assurance procedures and measures will be performed.

One way of mitigating this is to enforce the workflow, and make it a team wide rule that everyone knows and follows, e.g. "No code fixing a bug will be committed unless a unit/integration test is also committed reproducing the bug. No code will be merged until all tests pass. No merge will be done unless all commits have been code reviewed." If the transitions between the various states of a task are done by the issue tracking system, and it can verify that a transition is allowable, then even better.
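
As a toy illustration of that last point (entirely hypothetical - not any particular issue tracker's API), the allowable transitions are easy enough to express and check as data:

```csharp
using System;
using System.Collections.Generic;

// Toy model of the workflow described above: each state lists the states it may move to.
public static class TicketWorkflow
{
    private static readonly Dictionary<string, string[]> Allowed =
        new Dictionary<string, string[]>
        {
            { "Open",          new[] { "Investigation" } },
            { "Investigation", new[] { "Development" } },
            { "Development",   new[] { "Review" } },
            { "Review",        new[] { "Development", "Testing" } }, // changes needed, or passed
            { "Testing",       new[] { "Development", "UAT" } },     // bug found, or passed
            { "UAT",           new[] { "Development", "Closed" } },
            { "Closed",        new string[0] }
        };

    public static bool CanTransition(string from, string to)
    {
        string[] targets;
        return Allowed.TryGetValue(from, out targets) && Array.IndexOf(targets, to) >= 0;
    }
}
```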

Another way of avoiding this situation is to improve planning by scheduling a truly manageable amount of work for the iteration. Ensure that time is allocated to unit testing and code review (whether as separate items with their own estimates and measures of time spent, or included in an aggregate estimate and progress measure), that this is understood by the project manager, and that the PM handles it if the client asks why they aren't getting more and more features - because the time is being spent on delivering fewer features with more quality. In the end, if you're constantly bringing out quality code, you're spending time on enforcing quality measures instead of wasting time fixing bugs that shouldn't exist in the first place.

Finally, another great way of learning how a system works, and to learn its conventions, is to review the code. Sometimes even reviewing small code changes in commits can be much more enlightening than trawling through reams of code. It could also be used to keep track of the team's progress, and enhance a daily/weekly stand-up/progress-meeting.

Wednesday, February 1, 2012

Symbolic links and hard links in Windows

A lot of people coming from the Linux world will probably, at some point, have used symbolic links (or soft links) and hard links.

Windows has had shortcuts for a long time, but these are actual files that point to other files; they aren't entries in the filesystem that merely reference other files, which is what links are, meaning that shortcuts take up more space.

A lesser-known fact about Windows is that it also has symbolic links and hard links, and with NTFS there is also the concept of a junction point, which is essentially a hard link for a directory. The differences between them are discussed quite nicely in the post "Windows File Junctions, Symbolic Links and Hard Links".

It also appears that the capability has been around for a while - junctions go back to Windows 2000 - and since Windows Vista there's also a handy command-line tool, mklink, to help with this.

Open up a command prompt, type mklink /? and then try it out for yourself in a test area: play with creating new files, linking to them, deleting the original files, seeing how the links are affected, and so on.
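
Something like the following makes a reasonable playground (an elevated prompt is needed for the symbolic links by default; hard links and junctions don't need it):

```bat
rem Create a test file and directory, then the different kinds of links to them.
echo hello > original.txt
mkdir targetdir

mklink soft-link.txt original.txt
rem   -> symbolic link to a file

mklink /H hard-link.txt original.txt
rem   -> hard link to a file

mklink /D dir-link targetdir
rem   -> symbolic link to a directory

mklink /J dir-junction targetdir
rem   -> junction (the directory "hard link")

rem Now delete original.txt and see which links still resolve.
```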

Disclaimer: If you delete something that wasn't just a pointer to a file, and was the actual file, that's your own fault, don't come looking for me :D