I've been working on developing a new database CI tool suite, and was talking with a friend about his DB woes. We talked about what problems he was having, where they occurred, and how long they were taking to resolve. His database was really simple. The scripts were pretty much always correct. But some scripts might be forgotten, and his problems usually revolved around tracking down missing changes.
I realized my solution didn't fit his problems at all. I had never worked in an environment that was all that simple. There were always mistakes in the scripts. And it was always painful to correct them. When things went wrong, it was like the DB became the black hole of engineering hours, sucking away everyone's time. We'd try to repair the mistake by hand. Or if we couldn't figure out how to repair it, we'd have to restore from a snapshot of production. And while all these repairs were going on, the engineering department was pretty much down.
Making mistakes was so painful. And trying to use any of the database migration tools out there didn't seem to solve my problems. In some cases, it even made them worse. I always ended up resorting to rolling my own custom database tools.
So that got me thinking... how do you decide what kind of solution you need?
While there are many complex factors that drive DB development costs, there are two that seem to characterize the problem space quite well: frequency of mistakes, and cost of recovery.
When mistakes are rare, and recovery costs are cheap, existing migration tools are the perfect solution. Most of the effort is spent in keeping databases up to date, and making sure all the scripts are deployed as the changes evolve. Migrations are excellent at solving that problem, and handle it beautifully.
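The core of that deployment-tracking job is small enough to sketch. Here's a minimal illustration in Python against sqlite3 - a toy of my own, not the implementation of any particular migration tool - that records each applied script so it never runs twice:

```python
import sqlite3

def apply_migrations(conn, migrations):
    """Apply pending migration scripts in name order, recording each one
    so it is never deployed twice on the same database."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    for name, sql in sorted(migrations.items()):
        if name in applied:
            continue  # already deployed here; skip it
        conn.execute(sql)
        conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))
        conn.commit()

# Hypothetical scripts, keyed by an ordering prefix.
migrations = {
    "001_create_users": "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "002_add_email": "ALTER TABLE users ADD COLUMN email TEXT",
}
conn = sqlite3.connect(":memory:")
apply_migrations(conn, migrations)
apply_migrations(conn, migrations)  # second run finds nothing pending
```

Real tools layer on ordering schemes, checksums, and rollback support, but this is the essence of "making sure all the scripts are deployed."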
When mistakes are costly, migration tools can still be very helpful, but depending on how rarely mistakes occur, and how costly they are, migrations might not be enough. Augmenting a simple migration tool might be a good option.
When mistakes are frequent, there's an entirely different kind of problem going on. Developers aren't struggling with deploying the scripts, they're struggling to create correctly working scripts. Migration tools tend to be intolerant of deploying and recovering from broken scripts. And the tools can make recovery even more cumbersome by imposing additional constraints.
When frequent mistakes are also expensive to resolve... well, life is pain. There are really only two strategies for a less painful life - figure out how to make fewer mistakes, or figure out how to make mistakes easier to recover from.
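The two axes above suggest a simple quadrant map. As a purely illustrative sketch - the labels and recommendations are mine, not a prescription:

```python
def recommend_tooling(mistakes_frequent: bool, recovery_costly: bool) -> str:
    """Map the two cost drivers to the strategies discussed above."""
    if not mistakes_frequent and not recovery_costly:
        return "plain migration tool"  # deployment tracking is the main need
    if not mistakes_frequent and recovery_costly:
        return "migration tool plus safeguards"  # augment with backups/verification
    if mistakes_frequent and not recovery_costly:
        return "tooling tolerant of broken scripts"
    # frequent AND costly: attack both axes at once
    return "reduce mistakes and cheapen recovery"
```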
I haven't found much help in this space in either the open source or commercial markets, which is why I set out to fill that gap. I've built custom tools for solving similar problems several times over now, and had the opportunity to make a lot of mistakes along the way. Now I get to do something with all that learning, and hopefully help reduce some of the pain out there. :)
Saturday, August 10, 2013
Friday, August 9, 2013
Wow, how time flies...
Wow, has it ever been a while. It was January, February, then suddenly it's August. How the hell did that happen?
Some challenges came up at work that I had to jump in and get involved with, and most everything in my life went on hold... including my book, and all the community stuff I wanted to do. I got through writing chapter 4 back in February, and that's still where I'm at. But I'm excited to announce that I now have two uninterrupted days per week to focus on making this happen. I couldn't be more excited!
Since then, the ideas have started whizzing around in my head again. After going to lunch with an old friend of mine, I was reminded of an idea I had been struggling with. I had spent at least a month trying to make sense of something I didn't quite get, without words to describe what I thought was there. But like a flash in my mind, he gave me the piece of the puzzle I was missing. It was beautiful.
He was reading Thinking, Fast and Slow, a book I read half of and actually put down. He said, "Association triggers from specific to general." And that it had really struck him: "specific to general."
After reading On Intelligence, and working through The Art of Learning myself, I had been thinking about memory sequencing, and how it might impact our ability to recognize patterns. Like when you see some messed up code, why is it that sometimes an idea pops into your head about how to solve it, and other times it doesn't? Why does this seem to happen more in some developers' heads than others? Is this something that can be learned? And can we learn how to do it faster? This was the puzzle I was working on.
Specific to General.
We teach developers design patterns by handing them a book of design patterns. A collection of "aha" moments from our predecessors. But then armed with our new knowledge, we don't seem to have an ability to apply them. When we see our own code, with our own problems, that flash of insight just doesn't happen. Then for some, with experience, it happens.
Well, you could just say it's experience. But couldn't we tailor the creation of the right experiences so that you would learn what you need to learn faster?
My friend had the key to my puzzle. Specific to General. The memory sequencing is critical, and the specific sequence of recognition is the opposite of what we teach.
If I want to have insight that leads to a design pattern, I need to experience specific problem instances that map to the more general pattern. When I scan the code, the similarity of structural pattern to my specific memory should trigger recognition.
Going to have to pick up that book again I think. :)
Monday, December 3, 2012
My book is live!
I finally got my book effort officially kicked off. I'm planning on iterating through chapter development and getting the first early release done in January. Also kicking off my new community group in January for Austin DIG and the Idea Flow project - a lot to do!
My awesome friend Wiley also helped me design the book cover. :)
http://leanpub.com/ideaflow
Saturday, July 21, 2012
Breaking Things That Work
I went to NFJS today, and went to a talk on "Complexity Theory and Software Development" by Tim Berglund. It was a great presentation, but one idea in particular stuck out to me. Tim described the concept of "thinking about a problem as a surface." Imagine yourself surrounded by mountains, everywhere you look - you want to reach as high as you can. But from where you are, you can only see potential peaks near by. Beyond the clouds and the mountain range obstructing your view, might be the highest peak of all.
With agile and continuous improvement comes the concept of tweaking the system toward perfection. But a tweak always takes me somewhere nearby - take a step, and if it wasn't up, step back to where I was. But what if my problem is a surface, and the solution is some peak out there I can't see? Or even if it is a peak I can see... if I only ever take one small step at a time, I'll never discover that other mountain...
Maybe sometimes we need to leap. Maybe sometimes we need to break the things that are working just fine. Maybe we should do exactly what we're not "supposed to do", and see what happens.
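The tweak loop above is essentially hill climbing on that surface, and a toy sketch (my own illustration, not from Tim's talk) shows why it gets stuck on the nearby peak, while an occasional leap - modeled here as a random restart - can discover the taller one:

```python
import random

def height(x):
    # A "surface" with two peaks: a small one near x=1, a taller one near x=8.
    return max(3 - (x - 1) ** 2, 10 - (x - 8) ** 2)

def tweak(x, steps=1000, step_size=0.1):
    """Take a step; if it wasn't up, step back - continuous small improvement."""
    for _ in range(steps):
        candidate = x + random.choice([-step_size, step_size])
        if height(candidate) > height(x):
            x = candidate
    return x

def leap(x, attempts=20):
    """Occasionally jump somewhere far away, then tweak from there."""
    best = tweak(x)
    for _ in range(attempts):
        candidate = tweak(random.uniform(-10, 20))
        if height(candidate) > height(best):
            best = candidate
    return best

random.seed(1)
stuck = tweak(0.0)  # climbs only the nearby peak (height about 3)
found = leap(0.0)   # leaps can land in the taller peak's basin (height about 10)
```

Starting from the same place, pure tweaking tops out on the small peak; the leaping version reaches the mountain the tweaker could never see.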
Now imagine a still pool of water... I drop in a stone, and watch the rings of ripples growing outward in response. I can see the reaction of the system and gain insight into the interaction. But what if I cast my stone into a raging river? I can certainly see changes in waves, but which waves are in response to my stone? It seems like I'll likely guess wrong. Or come to completely wrong conclusions about how the system works.
With all the variance in the movement of the system - maybe it takes a big splash to improve our understanding of how it works? Step away from everything we know and make a leap for a faraway peak?
Here's one experiment. We've noticed that the tests we write during development provide a different benefit when we write them versus when they fail and we need to fix them later. How you interact with them and think about them totally changes. So maybe the tests you write first and the ones you keep shouldn't be the same tests? We started deleting tests. If we didn't think a test would keep us from making a mistake later, it was gone. We worked at optimizing the tests we kept for catching mistakes. But this made me wonder about a pretty drastic step - what if you designed all your code with TDD, then deleted all your tests? What if you added back only tests specifically optimized for the purpose of coming back to later? If you had a clean slate, and you were positive your tests already worked, what would you slip in the time capsule to communicate to your future self?
Friday, June 8, 2012
What Makes a "Better" Design?
An observation... there are order-of-magnitude differences in developer productivity, with a big gap in between - a place where some people get stuck and others make a huge leap.
Of the people I've observed, it seems like there's a substantial difference in the idea of what makes a better design, as well as in the ability to create more options. Those that don't make the leap tend to be bound to a set of operating rules and practices that heavily constrain their thinking. Think about how software practices are taught... I see the emphasis on behavior-focused "best practices" without thinking tools as something that has stunted the learning and development of our industry.
Is it possible to learn a mental model such that we can evaluate "better" that doesn't rely on heuristics and best practice tricks? If we have such a model, does it allow us to see more options, connect more ideas?
This has been my focus with mentoring - to see if I could teach this "model," more specifically a definition of "better" that means optimizing for cognitive flow. But since it's not anything static, I've focused on tools of observation. By building awareness of how the design affects that flow, we can learn to optimize it for the humans.
A "better" software design is one that allows ideas to flow out of the software, and into the software more easily.
Monday, June 4, 2012
Effects of Measuring
As long as measurements are used responsibly, not for performance reviews or the like, it doesn't affect anything, right?
It's not just the measurements being used irresponsibly - the act of measuring affects the system, our understanding, and our actions. Like a metaphor, metrics highlight certain aspects of the system, but likewise hide others. We are less likely to see and understand the influences on the system that we don't measure... and in software, the most important things - the stuff we need to understand better - are the things we can't really put a number on.
Rather than trying to come up with a measurement, I think we should try to come up with a mental model for understanding software productivity. Once we have an understanding of the system, maybe there is hope for a measurement. Until then, sustaining productivity is left to an invisible mystic art - with the effects of productivity problems being so latent, by the time we make the discovery, it's usually way too late and expensive to do much about it.
Understanding productivity, unlike measuring it, is I believe WAY more worth the investment. A good starting point is looking at idea flow.
Thursday, May 24, 2012
Humans as Part of the System
I think about every software process diagram I've ever seen, and every one seems to focus on the work items and how they flow - through requirements, design, implementation, testing, and deployment. Whether the cycles are short or long, with discrete handoffs or a collapsed 'do the work' stage, the work item is the centerpiece of the flow.
But then over time, something happens. The work items take longer, defects become more common and the system deteriorates. We have a nebulous term to bucket these deterioration effects - technical debt. The design is 'ugly', and making it 'pretty' is sort of a mystic art. And likewise keeping a software system on the rails is dependent on this mystic art - that seems quite unfortunate. So why aren't the humans part of our process diagram - if we recognized the underlying system at work, could we learn how to better keep it in check?
What effect does this 'ugly' code really have on us? How does it change the interactions with the human? What is really happening?
If we start focusing our attention on thinking processes instead of work item processes, how ideas flow instead of how work items flow... the real impact of these problems may actually be visible. Ideas flow between humans. Ideas flow from humans to software. Ideas flow from software to humans. What are these ideas? What does this interaction look like?
Mapping this out even for one work item is enlightening. It highlights our thinking process. It highlights our cognitive missteps that lead us to make mistakes. It highlights the effects of technical debt. And it opens a whole new world of learning.