
Stop Burying your Important Tasks in your Backlog

Posted by Michael Chletsos on Tue, Jul 16, 2013
  
  

Have a large Backlog? Hate sorting the entire thing? Want to move fast and continuously? Then stop sorting your entire Backlog.  Just don't do it.

Instead, sort only the important tasks to the top of the Backlog. Once those are done, or more accurately once they start to dwindle, sort the next batch of tickets. This ensures you are not wasting time, because you:

  1. Never sort tasks that will never get done
  2. Sort only the priorities you can handle now
  3. Can shift priorities quickly
  4. Keep your tasks prioritized
  5. Can create a Pull system
  6. Prevent work on tasks that are not ready

In other words, you stop wasting effort on predicting the future while keeping the team nimble and the tasks prioritized. But how do you know where your sorting starts and ends in your Backlog? This is where the Ready-line is useful.


The Ready-line keeps important tickets at the top of your Backlog in the Planner view, preventing them from getting lost. Drag and drop the bar to the desired location, or drag and drop tickets into this area, to separate work that is ready to be worked on from work that is still being fleshed out.

Tickets above the Ready-line can be easily moved into your Current milestone, or vice versa. This ensures that the most important tickets are next in the queue while giving your developers a clear area to pull work from.

To get a free Assembla Renzoku plan with the Ready-line, sign up here.


Something Cool with Hosted Repositories at Assembla is Happening

Posted by Michael Chletsos on Mon, May 13, 2013
  
  

We announced our latest feature, Server Side Hooks, the other day. But before we even did that, something very cool happened: we got our first hook submitted by a contributor outside of Assembla. Thanks so much, Jakub. Now users with Subversion repositories can install this hook and check their PHP code syntax.


We would never have had the time to think about creating, let alone actually implementing, a solution for checking PHP code, because we would want to check all sorts of code styles and the scope would keep growing. Now users can scratch their own itch with minimal effort on our part.

For those of you still not sure what I am talking about: we are allowing customers to write their own Server Side Hooks and install them on our servers. That's right, you can extend Assembla's cloud repository offering.
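To give a flavor of what such a hook can do, here is a minimal sketch of an SVN pre-commit hook that runs php -l over the PHP files in a commit. It is illustrative only and assumes php is on the server's PATH; the contributed hook may well look different.

  #!/bin/sh
  # Illustrative SVN pre-commit hook: reject commits that contain PHP syntax errors.
  REPOS="$1"
  TXN="$2"
  STATUS=0

  # Look at every .php file added or updated in this transaction.
  for FILE in $(svnlook changed -t "$TXN" "$REPOS" | awk '/^[AU].*\.php$/ {print $2}'); do
    # svnlook cat streams the new file contents; php -l lints them from stdin.
    if ! svnlook cat -t "$TXN" "$REPOS" "$FILE" | php -l >/dev/null 2>&1; then
      echo "PHP syntax error in $FILE" >&2
      STATUS=1
    fi
  done

  exit $STATUS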

Thanks again and keep those hooks coming.


Server Side Hooks on a SaaS repository? ✓

Posted by Michael Chletsos on Tue, May 07, 2013
  
  

Oh BTW, you can have Server Side Hooks in a SaaS Repository.

Cloud repository hosts have failed us. The power of hosting your repository locally is the ability to implement Server Side Hooks. These hooks allow you to control your repository and the source code contained within. It's super convenient for an organization with many contributors to a single repository. You can syntax-check code, ensure commit messages are proper, add the power of automation, or do anything else you need your repository to do, better than if you were relying on external webhooks.

To add a Server Side Hook in your current Assembla Repository - go to the Settings Page -> Server-Side Hooks:


  • Git: pre-receive, post-receive and update hooks

  • SVN: pre-commit, post-commit, pre-revprop-change and post-revprop-change hooks

  • Community Supported: Submit your own hooks or partake in the fruits of another’s labor

  • Prevent commits that do not comply with your Coding Standards

  • Validate commit messages for status updates and a valid ticket reference (see the sketch below)

  • Create Workflows with specific status and ticket changes or kick off external procs
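
As a concrete illustration of the commit-message validation idea, a git pre-receive hook along these lines would reject pushes whose commits lack a ticket reference. This is only a sketch: the "re #123" reference format and the handling of new branches are assumptions, not the actual community hooks.

  #!/bin/sh
  # Illustrative git pre-receive hook: require a ticket reference such as "re #123"
  # in every pushed commit message. It does not handle branch deletions or brand-new
  # branches (oldrev of all zeros), which a real hook would need to special-case.
  while read OLDREV NEWREV REFNAME; do
    for COMMIT in $(git rev-list "$OLDREV".."$NEWREV"); do
      if ! git log -1 --format=%B "$COMMIT" | grep -Eq 're #[0-9]+'; then
        echo "Commit $COMMIT has no ticket reference (expected something like 're #123')" >&2
        exit 1
      fi
    done
  done
  exit 0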

We are very excited about Server Side Hooks and hope that you find them as useful as we do. Take a look at some of our other available Repository Features.


Your Personal Source of Information

Posted by Michael Chletsos on Mon, Mar 18, 2013
  
  

Finding Focus for Your Priorities

Why are none of your priorities being worked on? You have told everyone on the team over and over again what they are. You ask everyone if they are working on the priorities, and they assure you it's their number one task. And that's all you hear, over and over again.

Insanity: doing the same thing over and over again and expecting different results. --Albert Einstein (attributed)

Stop the insanity and enter the Cardwall. The Cardwall is a Kanban board, but it's so much more. When your team works with a Cardwall and keeps tickets up to date, the Cardwall is your personal source of information for the current status of work, in progress or not.



  • There is no reason to track down each person to interview them on the status of a ticket.

  • You can see if your priorities are in progress and if so, you can open up the ticket to view progress of that specific task.

  • Code commits, developer and product manager comments, QA results, merge requests, file attachments and many other parts of the discussion are found in the details of the ticket.

  • The ticket is the most accurate form of information that one can view.

To learn more about other collaboration features at Assembla, click here.




Assembla Introduces One Button Deploys and More

Posted by Michael Chletsos on Tue, Jan 22, 2013
  
  


Have a complicated deploy process?  Want to simplify and standardize?  Assembla has just released a new beta tool, the SSH Tool.  It allows you to run any command remotely on a server straight from Assembla.  The script can be run manually, on a time-based schedule, or triggered by a repository event.  This allows you to set up deploy scripts, then deploy with one click, no matter how complicated your deploy is.  We use it internally to deploy assembla.com and to manage our servers.

The possibilities are limitless with the SSH Tool: you can provision AWS servers, kick off Continuous Integration processes, and run deploys to any environment. What is extra nice is that you have a centralized place to review and monitor these processes as they run. For example, Assembla has one script that lists all the current Staging environments running on AWS, which is very useful when trying to find the IP address of a Staging server.
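
For illustration, such a listing script could be as small as the following. This is a sketch using the AWS CLI, and the Environment tag is an assumption about how the instances are labeled, not a description of our actual script.

  #!/bin/sh
  # List the public IPs of running instances tagged as Staging (tag name is illustrative).
  aws ec2 describe-instances \
    --filters "Name=tag:Environment,Values=staging" "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].PublicIpAddress" \
    --output text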


Self Documenting & Traceable

You no longer need to ask how the process ran; it's always available to you in your space as the output of the process.  This is invaluable for a remote team, where people can seemingly disappear without cause or reason.

The output of each run is stored and available in your SSH Tool.  This allows you to easily see the last action run for a script.  There is no question whether someone remembered to run a script; you can see it in the Tool.  When using it as a deploy tool, you can see the last time that you deployed to Production or to your Staging environments.

Standardize

Too often, I find that one operations person does something differently than another operations person; this leads to confusion and non-standard practices.  By giving people a button to press, the script is always run the same way, and you can expect the same results and setup each time.  This is invaluable when doing common work in Production.  For Assembla, it has allowed us to move the deploy process from Operations to our Developers.  Now Developers are free to deploy to Production whenever they see fit.

Technology

So how does it all work?  You upload an RSA key, which we generate, to your ssh account on the remote server.  There is only one key, because you only need one key to identify the SSH Tool.  We provide the key so that it is not used anywhere else.

Once the key is in place, it's just as if you were ssh'ing to the server and running commands, any commands.  The output is relayed back to the tool and stored for your convenience.  The result of a script run is determined by the exit status of the last command: 0 means success, anything greater than 0 means failure.

We suggest that you run your scripts with nohup on a unix/linux system, just in case the process has a network failure and loses a connection.  Nohup will ensure that the process continues running even if the Assembla connection goes down.  Screen is another alternative.
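
Putting those pieces together, a deploy script run through the SSH Tool might look something like the sketch below. The paths and the rake task are illustrative assumptions, not our actual deploy script.

  #!/bin/sh
  # Illustrative deploy script for the SSH Tool.
  # The exit status of the last command tells the tool whether the run
  # succeeded (0) or failed (greater than 0).
  cd /var/www/myapp || exit 1
  git pull origin master || exit 1

  # Run the long part under nohup so it keeps going if the connection drops.
  nohup bundle exec rake deploy > /tmp/deploy.log 2>&1

  # Report the result of the deploy task back to the SSH Tool.
  exit $?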

That’s it, so go out and start running commands on your servers right from the Assembla workspace.  Please let us know what you think about this simple, yet powerful tool.

Thank you Artiom Diomin, Stanislav Kolotinskiy, and Ghislaine Guerin for your work on getting this tool out to the community.

Get this and many other features for free with Assembla Renzoku.


Continuous Deployment is Secure: How to Patch 3rd Party Apps Uber-Fast

Posted by Michael Chletsos on Wed, Jan 02, 2013
  
  

Today, a high-risk Security Bulletin was posted for Ruby on Rails.  Assembla was able to process this and patch within 3 hours of the posted bulletin.  We did this working solely within our normal, everyday process.  This is the power that a good Continuous Deployment process brings to the table.

Being able to patch 3rd party applications is rather important these days.  As we rely more on them, we become more vulnerable.  High-profile security bulletins are common; we love Hacker News, but it's not a secret, and the number one posting today is about the Ruby on Rails vulnerability.  This means that everyone else knows about it as well, so the clock starts ticking.  How important is your data?  Ignoring a problem like this can be the end of your business, whereas getting a fix out quickly will make your customers feel better and safer knowing that you have them covered, and it may even give you a competitive edge while everyone else is running around patching and fixing issues.

In walks Continuous Deployment (for more information, see definitions) to help you streamline your process.  At Assembla, we were able to patch our codebase, test it via our CI server, do a quick QA analysis, and then push right out to production with no bottlenecks.  It took longer to get notified than to start the process; the conversation went like this:

[12:59:49 PM] Lead Dev: https://groups.google.com/forum/?fromgroups=#!topic/rubyonrails-security/DCNTNp_qjFM
Did we patch?
this is on top of HN
[1:00:27 PM] Me: no
I will create a ticket and get it looked at
[1:00:39 PM] Lead Dev: lets patch now
[1:00:44 PM] Me: OK

Time went on as he pushed the patch to our origin and our CI process kicked off.  We had some failed specs, which were dealt with, and then we got ready for deploy:

[2:05:18 PM] Me: and we are going to deploy

So approximately 1 hour after we learned of the bulletin, we had deployed the patch out to production.  That is nice work; we even had spec failures that alerted us to potential issues but did not stop the process.  Deploy takes about 10 minutes and is completely automated once you press the button.  That's it, no big deal to update a major component quickly.
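
For illustration, the mechanical steps of such a patch in a git-based Rails project look roughly like this. It is a sketch only; the branch name and commands are illustrative, not a transcript of what we ran.

  # Branch from stable master, point the Gemfile at the patched Rails release, then:
  git checkout -b rails-security-patch origin/master
  bundle update rails                   # updates Gemfile.lock to the patched version
  git commit -am "Patch Rails security vulnerability"
  git push origin rails-security-patch  # CI runs the spec suite on the push
  # Once CI and a quick QA pass are green, merge to master and press the deploy button.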

Check out how we accomplish this with:

A Better Git Workflow

A Better Integration Strategy

A Good CI strategy

And lots of Automation.

Learn how you can Achieve Continuous Delivery

Does your deploy process compare with this speed? If not, maybe you want to check out how Assembla can help with your Continuous Delivery process.


Which Git Workflow is Best? Mine of Course.

Posted by Michael Chletsos on Fri, Nov 30, 2012
  
  

Git is all the rage, and we are all rushing to move over to it.  But there is a critical problem: git allows for almost any workflow imaginable.  I mean, I can use a straight svn-style workflow with a master branch (similar to svn trunk) and have development or release branches from that.  I can implement the rather common nvie workflow or use a Gerrit workflow.  I can come up with some other masochistic workflow.  So which git workflow is best?  Honestly, I don’t know.  It truly depends on your needs and your situation.

For open source projects, it seems that forking and merging back into a master repository works well.  For businesses, well, they are still trying to grasp what git is and how they can utilize it effectively.  What I have found works rather well for Assembla, boasting 25-30 developers at any given time, is a forked network that has one repository per developer and a common origin.

The Story

So here is the story.  Assembla transitioned to git and did what everyone was doing: we had no idea how to work in it, so we used it, as I like to call it, “subversion-style”.  It was not utilizing the power of git, but it worked, so why fix it?  Well, keep reading to understand why.  We were releasing about 1-2 times per month, nothing impressive.  We got our features out and they had bugs.  We would fix them.  The team was typically stressed; we were either in a release sprint or in bug-fixing mode.  There seemed little time to breathe, and releases got more complicated and more error-prone as time went on.  We knew this had to change.  We realized that we had to move QA and change the way developers were working in the codebase.  Read more about our workflow and conclusions where we Learned to Avoid Premature Integration.  This took a radical change to the way we worked in git.  In the end, we are releasing software several times a day, while the development team has less stress and fewer issues because of the process change.


The Conclusion

Our developers work in forks, with a common origin.  This allows them to have as many branches as they like and to create whatever tags they deem necessary without interfering with anyone else, as well as to break their build or keep unreleasable code in it.  They then merge their work upstream, and merge changes from upstream to get other people's work.  Since all the work pivots around a common origin, and since this common origin is always considered stable, the work can be integrated with production code and shared amongst developers without them stepping on each other.  Of course conflicts will occur, but they are dealt with by the person who encounters them.  Read more about this problem where we explain the Continuous Delivery process.  The basic structure looks something like this:

It's very simple, but that is what makes it so flexible.  One developer will not block another developer's work.  Before code is merged into origin/master, origin/master must be merged back into the dev/master repository, or a temporary branch from origin/master with the merge from the dev/master repo must be created, to test for conflicts or other issues.  Whenever code is committed to origin/master, it must be considered stable and ready for deployment to production.  We use Merge Requests internally to control the flow of code from developers' repositories to origin; read about how you can use Merge Requests to Code Review.
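
In day-to-day terms, a developer's loop in this structure looks roughly like the following. The remote name "fork" and the placeholder URL are illustrative; the shared stable repository is assumed to be the "origin" remote.

  # One-time setup in the developer's clone: "origin" is the shared stable repo,
  # "fork" is the developer's personal repository.
  git remote add fork <url-of-developer-fork>

  # Pull the latest stable code from origin before starting or finishing work.
  git fetch origin
  git merge origin/master      # resolve any conflicts here, in the fork, not in origin

  # Develop on whatever branches you like, then publish to the fork...
  git push fork master

  # ...and open a Merge Request from the fork back into origin/master.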

The above process can be used for teams as well, allowing for a more rigid architecture and more control points.  If we move the developers down into branches on a team repository and have them submit Merge Requests up to team/master, then teams submit Merge Requests from their team/master to origin/master.  This is where Assembla went before transitioning developers into their own forks.  We had a git architecture that looked like this:

Now the work is aggregated into team/master before going forward into origin/master.  We found that releases were often delayed and one developer's work would interrupt another developer's work.  We had only moved the problematic points to team/master instead of origin/master.  By moving the pain points to dev/master, we keep the problem where it belongs: at the developer's fingertips.

Assembla recently finalized the switch to working in developer-only repositories, and we have seen great success so far.  Every developer is learning more about the tool that seems so hard for many to grasp: git.  And we know that it is already far better than our “subversion-style” workflow.



To learn more about how to utilize Assembla to achieve a better workflow, read Git Review and Merge Like a Boss and Avoiding Premature Integration: or How we learned to stop worrying and ship software everyday.


Dogfooding and Guineapigging

Posted by Andy Singleton on Mon, Nov 19, 2012
  
  

You are probably aware that when developers use their own software, we say they are “eating the dog food.”  This motivates them to improve usability, quality, and features.  I’m going to mangle this into the verb form “dogfooding” to describe the practice of using software prior to a public release.  I’m also going to introduce the concept of “guineapigging,” in which you force innocent users to consume your dog food (guinea pig food?).  You might find it useful to throw around some of these terms at a meeting where you are discussing product management, test, and release.

In the old days, dogfooding was called “alpha testing.”  People working on the software would periodically get disks containing pre-release software, and they would install it, or else.  The practice is extremely common among modern Web developers.  They use switches, as described here, to show early features to employees.  It’s common to show new features for employee accounts, even at banks and insurance companies that have conservative QA  practices.  Dogfooding is everywhere.  If you are aggressive in forcing your stakeholders to try new features, you will make rapid progress.

Guineapigging was called beta testing.  You could often find users who would be willing to take “less stable” builds of software in order to see new features.  During the browser wars of the 90’s, Netscape gained the lead for a time by always offering a beta version.  It’s usually possible to find some set of users that will submit to beta testing if you make it part of your product strategy.

Guineapigging is often riskier.  On high-volume Web sites, it’s pretty common to turn on new features or changes for just a few servers or user accounts.  Then, you measure what happens.  Where switches are a key technique for dogfooding, logging and measurement are a key technique for guineapigging. Users don’t even know that they are part of a vast evolutionary conspiracy.


Take Control with the Continuous Delivery Dial

Posted by Andy Singleton on Wed, Oct 31, 2012
  
  

By now you know that Assembla supports continuous delivery, a process for releasing changes every day. We’ve offered up the secret sauce and explained our own experience.  However, most teams cannot release every day.  Maybe they make products that are distributed much less frequently, or maybe their systems require additional steps like localization, user acceptance testing, and management signoff.  For these cases, we present the “Continuous Delivery Dial,” from Steve Brodie and Rohit Jainendra at Serena.


The dial represents the steps you take to release software from design and development on the left, to final release on the right.  You can run continuous delivery in your development process up to a certain point on the dial.  After that point, you drop the software into a “release train,” as Serena calls it, where it goes through some batch processes.  You can adjust the point where you drop into the batch process.

Serena has built a Release Manager product which helps you track the releases in your train, along with a Deployment Manager that builds the destination servers, delivers the releases, and tells you where they were delivered.  One of our customers calls the point where developers decide what gets loaded into a release “the dock,” so I guess they have release boats instead of a train.  I’m going to get my guys some imaginary jets.

Why would you want to run continuous delivery only on the development side of the release cycle?

1) In our experience, continuous delivery is less annoying and less stressful for developers.  It’s annoying to send stuff away for QA and then get it back with a bug report two weeks later, which results in having to drop what you are now working on and set up the whole development scenario again.  If you can get into a lean process like continuous delivery, you might be able to actually work on one thing, finish it, and go on to the next thing.

2) The continuous delivery process is also more scalable than iterative processes.  As you increase the size and productivity of your development team, you have more stuff to integrate and test in every iteration.  If the integrate and test phase starts getting longer, you might have to move to continuous delivery.

3) You might want to increase the speed or frequency of your release process so you move the development process to continuous delivery, and then work on the rest of the “dial.”

You probably already have several versions of the "dial."  For example, most teams use fewer release steps for a high-priority bug fix than for a big new feature.  The dial gives you a convenient way to talk about the release process in each case.

Spinal Tap fans will be happy to learn that there is an “11” on the continuous delivery dial.  Some Web sites put a change onto production servers, measure what happens, and then if it works, merge it back into their gold master version.  I call this the 11 because you are sending changes to users before accepting them into a release.  It’s risky, but it’s an effective testing process that makes the gold master version cleaner for the other developers.


Avoiding Premature Integration to Reach Continuous Delivery

Posted by Michael Chletsos and Titas Norkunas on Mon, Oct 08, 2012
  
  

or: How we learned to stop worrying and ship software every day

We have been writing about testing and tools in our day-to-day Continuous Delivery process. However, we have not yet written about our development workflow. So, here you go. Did we show you the graph?

 

Just one year ago we were releasing once or twice per month, and this September we’re at 55 releases a month. All with the same development team. What does this mean to our customers? They get updates from us every day. Sometimes every hour, sometimes more than once an hour. What does that mean to us? No iterations, no useless stalling, no situations where one ticket breaks something and poisons the whole branch.

What was holding us back? Premature integration to the master branch turned out to be the culprit. Let me provide some context about our previous branching model and ticket workflow. For the eager ones, you can skip ahead to the solution.

Old Workflow

In terms of code versioning and servers, we had a few predefined branches that also matched QA servers: a production (prod) branch and a development (dev) branch that would be deployed on the respective QA servers (prod-qa and dev-qa). We would patch production with things tested on the prod QA server and do feature development and non-high-priority bugs in dev.

In terms of ticket flow, our development team works (this has not changed in the new process) according to these guidelines:

  • New: Ticket gets created, and Tech Leads prioritize the ticket in the Agile Planner

  • Accepted: Developer picks up the ticket and writes code for it

  • Review: If the first developer does not feel sure that the ticket is ready to go, another developer reviews the code

  • Test: If, after peer review, the developers do not feel sure that the ticket is ready to go, they ask for help from our QA team, who does some testing

  • Failed: Ticket goes to this status if it fails testing

  • Deploy: Admin deploys code to production

  • Fixed: Ticket is recognized as resolved

If at any point a ticket does not pass a test, it gets set to Failed. The original developer usually picks up failed tickets and works on them to fix the issues.

Problem

The major problem with this workflow, for us, was a few tickets poisoning the whole batch. Once a ticket got merged to dev, if it turned out that it was not good to go, it held up the whole batch until some action was taken on it: either fixing the issues or reverting the poisonous code.

This was severe, given that Assembla employs people from all over the world and we are big on asynchronous development. Possible “easy solutions” for the problem:

  1. Have very thorough specifications.
  2. Have a rigid peer review process.
  3. Have a release manager who maintains the dev branch and either fixes or reverts the poisonous code.

None of these work, or they are short-term solutions, or they bring slowness as a side effect. In essence, these solutions scale poorly. Is there a solution to this problem that scales well?

Solution

Let's analyze this Cardwall view. Can you spot the problem?

There are tickets to be released, but the whole batch cannot be released because some of the tickets are not ready. However, some (12) of them are ready. These tickets are not even related, but Ticket #15711 (as well as some of the tickets in Accepted/Review/Test, if they have code committed to master) stops us from releasing all the tickets in Deploy. Recognizing this leads us to the most important insight: every action that can put a ticket back in status must happen before integration to master. We decided to change just this one thing in our process. Master integration must happen at the last possible step. The result... Did we show you the graph?
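
In git terms, that one change looks roughly like the sketch below. The branch and ticket names are illustrative.

  # Keep the ticket on its own branch until it has passed review and test.
  git checkout -b ticket-1234 origin/master   # branch from stable master

  # ...development, review, and QA all happen on this branch, never on master...

  # Before merging, pull in the latest stable master and resolve conflicts locally.
  git fetch origin
  git merge origin/master

  # Only once the ticket can no longer be sent back does it touch master:
  git checkout master
  git merge --no-ff ticket-1234
  git push origin master                      # master stays releasable at all times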

The later the merge, the better: this means that issues will be caught before the integration most of the time. Have you heard of those companies that do not let their developers go home until the master build is green? Have you stayed late fixing someone else’s code after they have left, because their code is failing user acceptance testing? Yeah, forget about those ruined evenings.

We’ll continue to write on this blog about our development workflow and how we managed to move integration to be the last step of the process, so if you have read this far, you might as well subscribe to our blog.

