Articles Community Entrepreneurship Software

How to Win a Hackathon




On minimal sleep.

What is a hackathon?

A hackathon is a set time frame, usually 24 hours, where teams build something. They can either be for fun or competitive. For-fun or non-competitive hackathons often revolve around flexing creative muscles while building community. Many can even be for the benefit of non-profits or to contribute to open-source software. Organizations like GiveCamp and Startup Weekend help organize large hackathons worldwide and make it easy for developers to get involved. (Running a GiveCamp is a great way to flex your leadership skills).

At the end of the time limit, the teams present their creations to an audience. Depending on the type of hackathon, the winners can take home fantastic prizes, like funding for their idea, credits toward services, or cold, hard cash.

The primary goals are to get people out of their comfort zones, try new things, and build something incredible within a set of constraints. You’ll come to notice that not all hackathons are created equal, so understanding the context of the one you are entering is of paramount importance.

There is a formula

What if I told you there’s a formula for winning hackathons?

While every hackathon is different, the strategy for how you prepare, build, and present follows the same formula. In this article, the definition of winning is taking home first place or a top prize, but there are other incredible benefits of hackathon participation that can be equally important. You can "win" by becoming better as a developer or entrepreneur. You can "win" by networking with others. Keep that in mind as we start with the first step – preparing for the hackathon.

Know the rules

The first and most non-negotiable rule of winning a hackathon is to know the rules. You don’t have a shot if you are disqualified from the competition or fail to meet the requirements. If the rules state that you must use a specific service or technology, use that technology. If there is a requirement for a certain number of teammates, then meet it. This leads to the next, softer rule: knowing the audience.

Know the audience

Knowing the audience helps give your team and your product focus. If the hackathon is based around high innovation, then a to-do-list-style application will likely not win. If the theme is "hack for non-profits," then a robotics project based on drones will probably not help if the non-profits present are lower-tech office workers who just need a new website. Keep in mind who is attending, the hackathon theme, and what the people attending are looking for. This is particularly important for winning the "people’s choice" awards where the audience votes.

Know the judges

The win and the loss are ultimately in the hands of the judges. When planning your idea, it’s crucial to know who the judges will be, their industries, and their background. Each judge will bring with them biases, personal interests, and philosophies of what a winning project will look like.

An angel investor is likely highly focused on ROI and innovation. What would the market look like, and how much could this potentially be worth?

A judge with a marketing or sales background will likely be excited by things that show well.

A judge with a clear stance against the continued increase in mass surveillance technology will likely not vote for you if your project requires a camera in their bedroom.

The key here is to find hard-negative biases to avoid while looking for a project idea that hits the excitement button in some way for each judge.

The Team and The Idea

"If you want to go fast, go alone. If you want to go far, go together." is a quote that applies so well during hackathons. When selecting a team, the main tradeoff is that the fewer people you have, the faster you can pivot and move, while each additional person adds communication overhead. Additionally, you want to balance something that is both ambitious and doable. Understaff an idea, and you won’t complete it. Overstaff a plan, and it leaves someone sitting on their hands at best, or at worst, will slow you down.

It’s best to keep the team small, particularly before you solidify an idea – two or three people total is best. These key individuals are your "founders." They are both idea machines AND can execute. If the work for the idea can be evenly distributed among them, then continue with the team.

If the idea is ambitious enough to need more people, then you’re at your first crossroad to "go far." You must stop and determine if the execution you are planning can support more people and decide where the "snap points" or interfaces will be.

As developers, we’re familiar with the concept of an interface. It’s a contract between two types – a bridge that allows developers to build pieces of a program, totally isolated from each other, and have some semblance of a guarantee that the two will work together. This concept and construct is the key to building larger projects in a shorter amount of time. The most effective software leaders know how to use interfaces to effectively divide and parallelize work, and it’s a vital skill to have if you intend to ramp team size.

In one hackathon, my team won by building a universal search for an existing application. It was similar to how the Windows key works on Windows or Command + Space works on Mac OS. We started with a team of 3, where we created an interface for services to search different parts of the program. One person built a service for the navigation and menus, and another the user search service, and a third built the UI. This division of work allowed us to agree on a common language between the services and, once complete, gave us the unhindered freedom to work in parallel for the rest of the time. Once we realized this, we recruited two more people to help build other services. Since an interface was in place, we were able to parallelize that work as well.
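To make the idea concrete, here is a minimal sketch of what that "common language" between search services might look like. The names here (the service objects, `search`, `universalSearch`) are illustrative, not the actual interfaces from the hackathon project:

```javascript
// Every service implements the same shape: a name and a search(query)
// function returning an array of { title, action } results.
const navigationService = {
  name: "navigation",
  search: (query) =>
    ["Home", "Settings", "Reports"]
      .filter((item) => item.toLowerCase().includes(query.toLowerCase()))
      .map((item) => ({ title: item, action: `navigate:${item}` })),
};

const userService = {
  name: "users",
  search: (query) =>
    ["Ada Lovelace", "Grace Hopper"]
      .filter((name) => name.toLowerCase().includes(query.toLowerCase()))
      .map((name) => ({ title: name, action: `open-user:${name}` })),
};

// The UI only knows about the interface, so each service can be built
// in parallel – and new services can be added without touching the UI.
function universalSearch(services, query) {
  return services.flatMap((service) =>
    service.search(query).map((result) => ({ ...result, source: service.name }))
  );
}

console.log(universalSearch([navigationService, userService], "ada"));
// → [ { title: 'Ada Lovelace', action: 'open-user:Ada Lovelace', source: 'users' } ]
```

Once a contract like this is agreed on, recruiting a fourth or fifth person just means handing them the interface and a data source.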

Look for ideas that have parts that can be broken off. This gives you the power to be more efficient in building a real demo of your design or product and ultimately gives you the edge over the competition whose idea is less complete. Ideas are cheap. Execution is everything.

The Execution

The execution.

The demo.

The prototype.

This is where the fun begins. It can generally be a hectic segment. If you did your due diligence in planning and team building beforehand, your goal should be clear. Break up the work and start building. Pivots and questions are sometimes inevitable. If you see that you are running into unforeseen issues, sometimes it’s best to pivot in a new direction, or at least mock a particular service or part and note the assumption.

Some hackathons, such as Startup Weekend, have designated business and technical mentors. Keep in mind any resources available to you during the hackathon and take advantage of them if appropriate (my #1 tip would be any free snacks, of course).

For picking technologies to use, some people try new tech to learn. If your intent is to win, use safe and known languages and technologies for core parts of your prototype. Only use new or unknown tech for required pieces that relate to hackathon rules, if possible. If rules say no code should be written until the competition starts (most do), then think about the code beforehand. What might it look like? Many times you imagine what structures and integrations will generally look like before you code, so run things through in your head to ensure a clean implementation.

The Presentation

The final presentation is critical. It can make up for execution missteps, and it can lose the competition for an incredibly well-built prototype.

First, choose the best presenter on the team – the person who is most comfortable on stage. This isn’t always the leader.

Next, craft the story. Think of the judges and the audience. What ideas are they interested in, and what stories can you tell during the demo that they will connect with? Keep in mind the original goal of the hackathon. If it’s a startup or business-focused hackathon, then be sure to include all the essential numbers. Include data about the market, ROI, relevant competitors, etc.

Finally, be sure to nail the ending. Have a final slide or come back to the most important questions and remind the judges and audience of what they just saw. Why is this the problem to solve? Why now? Why is this the right solution and not something else, or nothing at all (many times in business we aren’t fighting against competitors, we’re fighting against customers or people doing nothing)?

In Conclusion

Hackathons are incredibly rewarding and pack incredible amounts of experience and learning into a small amount of time. I recommend everyone participate in one at some point in their career (the earlier, the better). Have you participated in a hackathon? If so, what helped you get the edge over the competition? Or, what did you do that you might do differently in the future? Leave a note in the comments and let me know.

Happy hacking!

Articles Featured Software

Synchronize Node.js Install Version with Visual Studio

Visual Studio is a fantastic IDE (and free for individual use)! It’s been so great that they’ve added modern web dev support in recent years for things like React, Webpack, and more. In addition, Visual Studio’s installer has an option to install Node.js as part of its regular installation in order to support the built-in npm task runners. However, I ran into an issue: I updated Node.js outside of Visual Studio, but since Visual Studio uses its own install that is separate from any outside installation, you can run into a node_modules package dependency issue where one version of npm installs a package (tying it to that version of Node/npm), and then commands run under the other version break.

Specifically, I had an issue with node-sass and Windows bindings. The solution was to point Visual Studio to the version of Node.js that I had already set up outside of Visual Studio. Here’s how to synchronize them:

Step 1: Find the Node.js install location

First, find the Node.js installation that you use on the command line. By default, Node.js installs to C:\Program Files\nodejs. Unless you picked a custom installation directory when you initially installed Node.js, this is likely the path you’ll use.

Once you’ve found the installation directory, copy that directory path to your clipboard for a future step.

Step 2: Configure Visual Studio

Let’s set up Visual Studio to point to the real Node.js installation.

Note: These instructions work on Visual Studio 2015, 2017, 2019, and probably future versions as well.

To configure Visual Studio to use a different version of Node.js, first open Visual Studio and navigate to Tools > Options.

In this dialog, go to Projects and Solutions > External Web Tools to open the dialog that manages all of the 3rd party tools used within Visual Studio. This is where Visual Studio is configured to point to a specific version of Node.js.

Add an entry at the top of the list with the path to the Node.js directory you copied earlier to force Visual Studio to use that version instead.


Using the Windows PATH Instead

If you’re making this change, you’ve probably noticed that you can also move the $(PATH) option up to tell Visual Studio to look at the PATH environment variable to determine where it should look for node or other command-line tools. This is probably what you want globally if you’re someone who is comfortable with the command line and understands the implications.

In general, it seems that Visual Studio tries to isolate itself and the tools it uses from other installs of Visual Studio and from anything else that may cause issues with it. Visual Studio developers have not traditionally had to touch the command line much, so this makes sense historically, but modern web applications require a different approach.

Switching the Node.js Version Using PowerShell

Another issue that might arise from using $(PATH) is that you might have projects that use different versions of Node.js. For example, one might use Node.js 8.x.x while another uses 10.x.x. In this case, you might want to use Node Version Manager, or "NVM" as it’s called, to switch versions on the command line in PowerShell.

You can use NVM to switch versions of Node.js yourself on the command line, or use it to read the package.json file’s "engines" property and set the appropriate version. If the version isn’t available, it can even silently install the appropriate Node.js version. This is helpful in many situations, and can be included in an MSBuild task to swap to the correct version when you hit the "Play" button, or as a step in your CI/CD build process.
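The "engines"-driven approach can be sketched in a few lines of Node.js. The package.json object is inlined here for illustration (normally you’d read it from disk with `fs.readFileSync`), and the fallback to "latest" is an assumed convention, not something NVM requires:

```javascript
// Sketch: derive the NVM command from a project's package.json.
const packageJson = {
  name: "my-app",
  engines: { node: "10.x" }, // the version this project is pinned to
};

// Use the pinned engine if present, otherwise fall back to "latest".
const nodeVersion = (packageJson.engines && packageJson.engines.node) || "latest";

// This is the command you'd run (or script) in PowerShell.
console.log(`nvm use ${nodeVersion}`);
// → nvm use 10.x
```

A build step can run this logic first and feed the result to NVM before npm install kicks off.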

Wrapping Up

And that’s it! Now you’re all synced up! Having two separate installs is really confusing. If you’re starting out with JUST the VS Node.js version, you’ll eventually come to a point where you may update Node.js by installing it outside VS, causing it to get out of sync anyway. If you’re a Node.js veteran, then you’re already using Node outside of Visual Studio and will need to do this anyway. It seems like there should be better documentation or indicators to show what version VS is using so this is more apparent.

Hope that helped. Did this fix it for you? Do you have a better way of keeping this in sync or a plugin/tool to help out? Let us know in the comments!

Articles Software

How to run Docker Windows Containers with McAfee Endpoint Security

McAfee Endpoint Security and I have a love/hate relationship in that I hate it when it gets in my way and love it when it’s not installed. In general, I appreciate security and security research, but recently I had been trying out (or attempting to try out) Docker and Kubernetes for a project I’m working on. It’s a .NET 4.6 web application, and as such, requires Windows Server Core (as opposed to the much lighter-weight, new Windows Nano Server or a Linux-based container). The fact that we now have an option to effectively containerize any application, including a Windows application, is incredible. So yesterday, I decided to try and set it up. Here’s how it went.

What OS to target with .NET Containers

The only potential downside that I had heard about Windows Server Core containers was that they are pretty big. When I say big, I mean real big. 10GB big.

Ok, fine, that’s…that’s not good, but there’s no other way to run full-framework .NET apps than in a Windows Server Core container, so I can get past that.

Installing Docker and gotchas

Installing Docker on Windows with McAfee is pretty straightforward. This was the first time I’d installed it in a couple of years, so I was pleasantly surprised that the entire thing was pretty seamless. The wizard created the necessary docker-users security group and added me to it, and Docker has a pretty nice interface now. There were a few initial setup steps that were unclear and that I had to look up, though:

  1. You need to add your user to the Hyper-V Administrators group. For whatever reason, the Docker for Windows installation created the docker-users group and added me there, but didn’t add me to the existing Hyper-V Administrators group. 🤷


  2. When building a docker container via the command line interface (Docker CLI), the login information is different from what you use to log into the website. The website accepts your username OR your email address (I use my email address), but the CLI will only accept your username, not the email address. It also doesn’t tell you why the login fails, so just get in the habit of using your username.

Issues building a docker image with McAfee Endpoint Security installed

Everything seemed to be going as well as I could expect until it came time to run docker build on my dockerfile and get everything set up.  It seemed to be pulling things from the registry OK, but then I was met with this error message:

C:\WINDOWS\system32> docker pull microsoft/windowsservercore
Using default tag: latest
latest: Pulling from microsoft/windowsservercore
9c7f9c7d9bc2: Extracting [==================================================>] 3.738 GB/3.738 GB
d33fff6043a1: Download complete
failed to register layer: rename C:\ProgramData\Docker\image\windowsfilter\layerdb\tmp\write-set-925881297 C:\ProgramData\Docker\image\windowsfilter\layerdb\sha256\3fd27ecef6a323f5ea7f3fde1f7b87a2dbfb1afa797f88fd7d20e8dbdc856f67: Access is denied.

Ok, so let’s look at what’s happening here. It’s saying it can’t register a layer because access is denied on my computer’s file system…but I’m an administrator! What could possibly be the issue? Then I remembered: McAfee. A quick Google search showed a lot of other folks having the same kind of issue, and it turns out that McAfee doesn’t even support Windows containers. Yeah, that’s right.

McAfee Endpoint Security Does Not Support Windows-based Docker Containers!

After tons of research, the answer from McAfee itself (according to a knowledge base article), is that McAfee Endpoint Security does not support Windows Docker containers! It seems the main issues were:

  • DNS does not resolve
  • Performance issues with containers
  • Slowness in opening containers
  • Firewall/NAT issues

To take a look for yourself, here’s the original KB article on Windows Docker containers not being supported by McAfee.

How can I run Windows Docker Containers on Windows if McAfee doesn’t support it?

There are actually a few workarounds that have been successful for me and my team:

Virtual Machines

First, you can use the docker tools inside a separate “real” virtual machine (using VMware, VirtualBox, etc.) and actually perform the work there. This gets you around having McAfee on the machine running Docker, but it has a few drawbacks. It will slow your workflow down, since you’ll have to jump through the VM from your local machine to reach the docker tools and images you may be running. This is not ideal, but might give you some runway until McAfee can add support.

.NET Core!

McAfee says that Linux-based containers are OK. This means if you’re using a language or tools that require Windows containers, you may be able to move to something that doesn’t require them.

I know this isn’t much help, particularly if you have a large application already that you’re moving to Docker, but can be an option if you’re just starting out or if you are able to adjust your stack enough to be able to drop the Windows requirement.

Remove McAfee?

Move away from McAfee if Windows Docker containers are a hard requirement for you and no other options work well for your team.

The bottom line

Basically, McAfee is really tough to work around as a developer, and Windows Docker containers are no exception. You have limited options in the meantime, but I feel the best recommendation is to not waste your time with workarounds and wait it out. McAfee should have an update at some point in the future to fix the issue. If you can’t afford to wait and the workarounds aren’t options, remember that making your voice heard on Twitter and email and letting McAfee know this is important will go a long way in having them increase this as a priority.

Articles Software

Azure Functions: Developer Infrastructure is the Sweet Spot

I recently gave a talk at TriDev about Azure Functions, the serverless programming product from Microsoft. Azure Functions is basically functions as a service: you write a single function in JavaScript or C# and the platform manages the entire infrastructure around it. You don’t need to worry about scaling, networking, load balancing, or even containers. In my time evaluating it, one of the things I found was that the best and easiest application for easing into Azure Functions is as part of your continuous delivery infrastructure. A lot of times it’s scary to try new cloud services, or at minimum to get experience with these new technologies in a “real” setting. Azure Functions are perfect for getting developer utilities into production without having to manage your own server.

Here are the slides from my talk.

Articles Community Software

Building Great Software Teams

Over the last decade I’ve worked with dozens of teams helping them work better as a team and individuals through mentoring and helping build infrastructure.  I recently gave a presentation that used Maslow’s Hierarchy of Needs and compared that to how you can build great developers on your team. Take a look at the slides below and get in touch if you’d like me to give this talk at your local meetup or conference!

Articles Featured Software

The Illusion of Code Quality

Code quality is something that I believe every developer strives for.  We want code to be the best it can be and there are tons of opinions on things developers can do to make quality high. Over the years as teams have moved to Agile from Waterfall, and as build and test automation has become better, a lot of the code quality metrics that experts have developed are becoming less helpful, or, dare I say, counter-productive.

The larger a team gets, but more importantly, the higher turnover gets (developers leaving the team/company and new developers without context coming onto the team or being hired), the harder it is for code to remain high quality over time. We’re all human; we can’t keep everything in our heads, and we can’t mind-read the original developer who left the company and wrote this code. The worst part is that we don’t know what we don’t know. We duplicate effort because we didn’t know there was a design document on Box, or we don’t go update it. There’s also setup information on the Wiki that should be changed, but we’ve not asked anyone where it is yet, because we didn’t know to, so our sweeping changes to the project aren’t reflected there. We’re also on a deadline, and there were already existing comments that StyleCop saw, but it can’t automatically tell us that something in the code is now out of sync with the comments, so developers can’t make any assumptions about the comments being right. Any of this sound familiar?

That’s ok, though! That’s human nature! We aren’t computers (I’m glad we’re not) and we’re not good at keeping documentation in sync, especially when most teams now use Agile (I say that loosely), but still carry assumptions over from the Waterfall days that are literally duplicated with automation today.  So what should we do to help keep down the illusion of quality and actually introduce REAL quality into our code?

Articles Software

Azure Bot Service and Why You Should Check It Out

At Build last year (2018) I had some free time and dropped into a chatbot presentation. I’ll be honest, I didn’t really care much about chatbots. I use them a bit in Slack, but honestly, they aren’t overly helpful. “/giphy nick cage” will find a gif for me, but it’s usually faster to Google things, particularly when I’m at a keyboard. Skype and Facebook bots, in general, have always felt more like a choose-your-own-adventure book than typing to an actual person and, again, there are usually faster UIs for finding information like that. So why is Microsoft’s service for building chatbots so compelling to me now? I’ll give it to you in one word:

Articles Software

JavaScript is Crazy! Code This, Not That!

Below are the slides from my recent talk at TriJS titled “Code this, not that”, on JavaScript and replacements for some of the ways we solve problems in JavaScript, and alternatives that are better and less error-prone. It’s also got a few interesting bits of knowledge that’ll definitely surprise you, like the truth table…let’s just say JS gets a little crazy.

Articles Software

Announcing Ember.JS and Broccoli.JS Task Runner Extension for Visual Studio

Visual Studio has an amazing task runner that lets you integrate task-based command-line tools into VS’s build system. This means you can list commands and even set them to run with builds right inside Visual Studio without even touching the command line! This is great for getting your team using these command-line tools while taking baby steps if your team isn’t comfortable with the command line yet. The Broccoli Task Runner adds support for both Broccoli files as well as EmberCLI files, which means all your Ember.JS apps now have full support in Visual Studio!

You can download the tools in the Visual Studio Gallery or contribute to it (since it’s open source) on Github.

Broccoli task runner example
Articles Software

How the Xbox One lost me, and then won me back with 24-Hour DRM and the cloud.

I love my Xbox 360.  Or, 360s, I should say.  I’ve had 4 over the last 8 years with some dying and some traded in for newer models.   I’m an Xbox fan, but mostly, I’m a fan of technology and progress (and my PS3, too).  I love console release years because of all the new upgrades, and especially the graphics.  This is the first real year where we’ve had innovation in the online space by everyone, and it’s very exciting.  This week I was appalled by the 24 hour check-in.  I even tweeted that I’d cancel my preorder if they kept it.  I was serious. Here’s how Xbox won me back.