Container wars, serverless and no-code are the en-vogue buzzwords in the technology market today, but if you are not competing in this space now, where do you look to leap ahead and regain that crucial competitive advantage?
One area that has caught the attention of many is Conversational Programming.
Once the stuff of science fiction, the idea that people could talk to a computer which comprehended what they wanted and created it all from voice commands, with little or no instruction, inspired the same awe and wonder as watching characters in TV programmes such as “Star Trek” make video calls from one side of a planet to the other, all without wires.
Both are now reality, albeit that Conversational Programming is still in its infancy. But what is it?
Conversational Programming can be defined as a programming capability that helps you build infrastructure, storage instances, platforms and applications by tracking the state of what you are building. It takes user selections as context for interpretation, and it constantly runs your program in order to give you informal feedback and improve it.
This enables greater visibility of how your code is working, helping you build more efficient and cost-effective capabilities, because you are better equipped to map the costs of each stage of the application lifecycle and workflow.
To give this context, let’s look at an exclusively AWS example in order to simplify the proposition as much as possible.
Taking Amazon Alexa, it is possible to build a set of skills that enable an application to be assembled from the Serverless Application Repository, as all of the component parts can be defined as services within it. By mapping each of these component parts, it is possible to use the micro-economics of each stage of the workflow to see its cost and value. This can greatly increase the speed at which capabilities are developed, but success relies on two key underlying things.
Firstly, understanding the cost and value implications. Aggregating the entire workflow together to define its value and cost creates a capital cost flow, or more simply put, a way of seeing how Revenue – Cost = Profit in what you are building.
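As a rough illustration of what such a capital cost flow might look like in practice, the sketch below aggregates per-stage cost and value across a workflow. The stage names and all figures are entirely hypothetical, invented for this example; they do not come from any AWS pricing data.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One component/stage of a serverless workflow (illustrative only)."""
    name: str
    cost: float   # assumed running cost per 1,000 invocations
    value: float  # assumed revenue attributed per 1,000 invocations

def capital_cost_flow(stages):
    """Aggregate per-stage economics: Revenue - Cost = Profit."""
    revenue = sum(s.value for s in stages)
    cost = sum(s.cost for s in stages)
    return revenue, cost, revenue - cost

# Hypothetical stages of an Alexa-skill workflow; all numbers are made up.
workflow = [
    Stage("voice-intent", cost=0.20, value=0.00),
    Stage("order-lookup", cost=0.55, value=1.10),
    Stage("fulfilment",   cost=0.80, value=2.40),
]

revenue, cost, profit = capital_cost_flow(workflow)
print(f"Revenue={revenue:.2f}  Cost={cost:.2f}  Profit={profit:.2f}")
```

Even a toy model like this makes the point: once each stage of the workflow carries its own cost and value figures, the profitability of the whole capability falls out of simple aggregation.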
Given the ever-increasing pressure on IT leadership to control and manage costs, alongside a growing awareness that cloud spend is not as easily defined as traditional IT spend, the ability to map capital flows is a critical component for CIOs seeking to understand and explain the value of what their teams are delivering.
CFOs, justifiably, want proof of investment and CIOs must deliver that evidence. However, they cannot do it alone.
The future leaders of IT organisations sit in today's DevOps and architecture teams, and they must be taught the cost implications of their design and decision making, in addition to the technical ones, from the outset in order to equip them for today and the future.
Secondly, the quality of the code and components in the Serverless Application Repository has to be good enough. Fundamentally, this is about good IT practice. Components must be in a mature state, and any change to them could therefore have significant impacts across multiple capabilities. This is where a ‘go slow to go fast’ attitude is actually beneficial, ensuring that the unintended consequences of change do not wreak havoc. It is a delicate balancing act.
In summary, the pace of technology development shows no sign of slowing down. Non-traditional IT practices are still maturing, and that leaves uncertainty. However, those actively engaging in this space are gaining competitive advantage, as they are forcing good IT practice (cost) to drive better outcomes (value). It is a long-term play, but one that will ultimately provide better evidence to CFOs; in the meantime, it will demand close management.