Building out the IoT backend at Ford

Ford predicts its data storage requirements will grow from today’s 13PB to 200PB by 2021. Chief exec Mark Fields said in a canned statement that the new bit barn “will increase the ability of Ford’s global data insights and analytics team to transform the customer experience, enable new mobility products and services, and help Ford operate more efficiently.”

Link

Michael Dell on Pivotal, 130% growth y/y

Pivotal had a pretty major milestone this last quarter, over a quarter-billion dollars in 2016 bookings, up 130 percent year-over-year. The momentum with Pivotal in terms of digital transformation is white hot. Pivotal is now engaged in a third of the Fortune 100, and I would say the strategy so far has been to only go after the tallest buildings in the city. The opportunity to take Pivotal Cloud Foundry into Fortune 2000 and beyond is enormous. That’s going to represent a big opportunity for partners.

From a recent interview.

Rackspace positioning around cloud and OpenStack, from the CEO

Now that they don’t have to compete with AWS, they have an extra $300m floating around in the spreadsheets:

“Ultimately now it’s about how are we going to build a stronger company. If we don’t have to go spend $300 million a year in capital competing against Amazon, building computing storage and networking, where should we go put that? In things like managed cybersecurity and professional services,” said Rhodes.

On OpenStack, finding the product/market fit for private cloud:

And what about OpenStack, the open-source cloud computing platform that Rackspace created with NASA?

“We thought the world wanted another alternative to public cloud,” said Rhodes. “What we are learning is the world doesn’t need another public cloud, so OpenStack is shifting form and going private cloud.”

Also, cameo from my former 451 colleague Carl Brooks.

Link

Omni-channel at Target: 14% of 2016 sales were “digital,” with 68% fulfilled in-store

In 2014, more than 93% of our transactions took place in stores, less than 7% digital. That season we had just started shipping from a small number of stores. In 2015, that same timeframe, digital sales reached almost 10% of our total sales. We more than doubled our ship-from-store capability to nearly 500 stores. We fulfilled 41% of all our digital orders inside of a store.

For 2016, just a few months ago, just last year, digital sales climbed to 14%, more than twice what we did two years earlier. We doubled ship-from-store again, to more than 1,000 stores. Our stores were fulfilling 68% of our digital orders. We finished December with record digital growth, including record-breaking days on both Thanksgiving and Cyber Monday.

Always nice to see multi-year numbers.

Link

Advice on introducing DevOps from Merrill Corp & SPS Commerce – Highlights

Nicely moderated by Bridget. Some of my notes and highlights:

  • Amy talks about pace of change, sustaining it in the beginning, etc.
    • The amount of time it took us to get going was a surprise – it was longer than expected.
    • If you can start to show results early, it helps build up momentum. “Having enough wins, like that, really helped us to keep the momentum going while we were having a culture change like DevOps.”
    • It takes the right people to keep that energy going, but also to be able to go back to the business to show why we are putting these changes in place.
    • You’re going to be able to see the changes to the business right away.
  • Peg – on tools: don’t try to fix the old ones, like ITIL service desk tools. Instead, they just had Jenkins open tickets and such, automating away the toil of dealing with the old tools (see the sketch after this list).
  • Global/offshore tactics, from Amy:
    • What with all the retrospective stuff, you need to be able to get teams together, physically. The collaboration angles are much better in person.
    • Set up each “shore” as an architectural and management island, making them as independent as possible. They also need their own context and autonomy, not held up by time zones, so they don’t have to wait 24-48 hours for authorizations and collaboration. [To my mind, this means taking advantage of the organizational de-coupling you can get with microservices.]
  • Starting change, even when the company needs it. Amy: You have to start with the business need, what’s the big driver behind a change like DevOps. [Managers often don’t make sure they figure this out, let alone disseminate it to staff.]
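
To make Peg’s Jenkins-opens-the-tickets point concrete, here’s a minimal sketch of the kind of glue a build step could run to file a change ticket automatically, rather than having a human work the legacy service desk form. The endpoint, payload fields, and token are hypothetical placeholders, not any particular service desk’s API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Hypothetical pipeline step: instead of hand-filing change tickets in a
 * legacy ITIL service desk, let the build open them. The URL, payload
 * shape, and token are placeholders -- adapt to whatever your ticketing
 * tool actually exposes.
 */
public class OpenChangeTicket {
    public static void main(String[] args) throws Exception {
        // Jenkins exposes build metadata as environment variables.
        String buildUrl = System.getenv().getOrDefault("BUILD_URL", "unknown-build");
        String commit = System.getenv().getOrDefault("GIT_COMMIT", "unknown-commit");

        String payload = String.format(
            "{\"summary\":\"Automated deploy for %s\",\"description\":\"Build: %s\"}",
            commit, buildUrl);

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://servicedesk.example.com/api/tickets")) // placeholder URL
            .header("Content-Type", "application/json")
            .header("Authorization", "Bearer " + System.getenv("TICKET_API_TOKEN"))
            .POST(HttpRequest.BodyPublishers.ofString(payload))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        // Fail the build step if the ticket couldn't be created.
        if (response.statusCode() >= 300) {
            throw new IllegalStateException("Ticket creation failed: " + response.body());
        }
        System.out.println("Opened ticket: " + response.body());
    }
}
```

The point isn’t this particular snippet; it’s that the ceremony the old tools demand becomes a side effect of the pipeline rather than a manual chore.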

Spending from outside of the IT department

Corporate departments outside of the IT department, globally, are forecast to spend $609bn in 2017:

A new update to the Worldwide Semiannual IT Spending Guide: Line of Business from the International Data Corporation (IDC) forecasts worldwide corporate IT spending funded by non-IT business units will reach $609 billion in 2017, an increase of 5.9% over 2016. The Spending Guide, which quantifies the purchasing power of line of business (LoB) technology buyers by providing a detailed examination of where the funding for a variety of IT purchases originates, also forecasts LoB spending to achieve a compound annual growth rate (CAGR) of 5.9% over the 2015-2020 forecast period. In comparison, technology spending by IT buyers is forecast to have a five-year CAGR of 2.3%. By 2020, IDC expects LoB technology spending to be nearly equal to that of the IT organization.

Meanwhile, all in, global IT spend was estimated at $2.4tn in 2016, but that includes telco and consumer tech. And here’s the demographic breakdown for enterprise IT spend:

In terms of company size, more than 45% of all IT spending worldwide will come from very large businesses (more than 1,000 employees) while the small office category (the 70-plus million small businesses with 1-9 employees) will provide roughly one quarter of all IT spending throughout the forecast period. Medium (100-499 employees) and large (500-999 employees) businesses will see the fastest growth in IT spending, each with a CAGR of 4.4%.

Sources: Technology Purchases from Line of Business Budgets Forecast to Grow Faster Than Purchases Funded by the IT Organization, According to IDC, March 2017 and Worldwide IT Spending Forecast to Reach $2.7 Trillion in 2020 Led by Financial Services, Manufacturing, and Healthcare, According to IDC, Aug 2016.

The Economist on Amazon – Highlights

  • Video: “In 2017 Amazon is expected to spend $4.5bn on television and film content, roughly twice what HBO will spend. But it has a big payoff.”
  • Prime momentum: “Mr Nowak reckons the company had 72m Prime members last year, up by 32% from 2015.”
  • Cloud: “Last year AWS’s revenue reached $12bn, up by more than 150% since 2014.”
  • Anti-trust, in the US: “If competitors fail to halt Amazon’s whirl of activities, antitrust enforcers might yet do so instead. This does not seem an imminent threat. American antitrust authorities mainly consider a company’s effect on consumers and pricing, not broader market power. By that standard, Amazon has brought big benefits.”

Are investors too optimistic about Amazon?

Vanguard’s thinking on microservices

Breaking up the monolith with good, old-fashioned OO-think:

Instead, Vanguard has begun a journey to break apart our monolithic legacy systems piece-by-piece by replacing them with microservices over time. With a microservices architecture, we remove the business logic and data logic from our applications and replace it with a set of re-usable modules of code that are built and deployed as independent entities. We then complement this architecture by chunking out our user interfaces into modular purpose-built components.

De-coupling for stability and resiliency, among other things:

This service-based approach to application architecture provides a variety of advantages over the jumble of code that defines a non-modular monolithic application. First, services reduce redundancy by making sure there is only one copy of application logic for a given capability – regardless of how many applications leverage that logic. In the long run, this leads to lower development costs and increases speed to market. Second, since these services are deployed independently and built in a resilient manner, outages in one area of an application are less likely to bring down an entire system. In some instances, several of our services can be down without our clients being aware of a loss in functionality thanks to the ability of our applications to automatically react to a service that isn’t available. Finally, services enable our applications to scale easier. The marriage of cloud and services means we can quickly spin up infrastructure to handle surges in the number of transactions we need to handle without needing to scale up an entire application.
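
To make the resiliency point a bit more concrete, here’s a minimal, illustrative sketch (not Vanguard’s actual code) of the “degrade gracefully when a service is down” idea: the caller of an independent service falls back to a safe default instead of propagating the outage. The service name and endpoint are invented for the example:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

/**
 * Illustrative only. Shows the "degrade gracefully when a dependency is
 * down" idea: call an independently deployed service, and if it times out
 * or errors, return a safe default so the rest of the application keeps
 * working.
 */
public class AccountSummaryClient {
    private static final String FALLBACK = "{\"balances\":\"temporarily unavailable\"}";

    private final HttpClient http = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    public String fetchSummary(String accountId) {
        HttpRequest request = HttpRequest.newBuilder()
                // Hypothetical endpoint for a summary microservice.
                .uri(URI.create("https://summary-service.internal/accounts/" + accountId))
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();
        try {
            HttpResponse<String> response =
                    http.send(request, HttpResponse.BodyHandlers.ofString());
            return response.statusCode() == 200 ? response.body() : FALLBACK;
        } catch (Exception e) {
            // The outage is contained here; callers never see the failure.
            return FALLBACK;
        }
    }

    public static void main(String[] args) {
        System.out.println(new AccountSummaryClient().fetchSummary("12345"));
    }
}
```

Layer that pattern across enough services and, as the quote says, several of them can be down without clients noticing a loss in functionality.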

Vanguard CIO: Why we’re on a journey to evolve to a microservices architecture

Australian government’s Cloud Foundry apps in production

Delivery teams are now able to build services faster and easier. In July 2016, DTA had 14 apps in production and 50 apps in development. In October 2016, the numbers increased to 47 apps in production and 225 apps in development.

Australian Government Cuts Release Time with Cloud Foundry, Iterates Faster – Cloud Foundry Live

Pivotal Conversations: Bringing Agility to Enterprise Data Workflows, with Sina Sojoodi

The summary:

This week we talk about how organizations are increasingly looking to improve how they use data, and the workflows around it, to innovate in their business. As we discuss with our guest, Sina Sojoodi, rather than the usual ideas about “big data” and “machine learning,” we focus on practical uses of data workflows like insurance claims handling and retail optimization. In many large, successful organizations the stacks that support all this processing are aging and not providing the agility businesses want. Of course, as you can guess, we have some suggestions for how to fix those problems, and also how to start thinking about data workflows differently. We also cover some recent news, mostly around Google Cloud Next and Pivotal’s recent momentum announcement.

Check out the SoundCloud page, or download the MP3 directly.

Kubernetes as the hybrid cloud magic maker

From 451’s report on Google Next:

Google believes that a hybrid architecture will persist in the coming years as enterprises continue to migrate workloads to various clouds. Its hybrid cloud architecture revolves around its virtual private cloud. Google VPC is an instantiation of GCP that can dedicate compute, storage and network resources to an enterprise. It is built upon Google’s proprietary private global network designed for high reliability, low latency and hardened security. Kubernetes acts as the orchestration and operational backplane for hybrid implementations. Elasticity and scale are achieved by linking to Google public cloud services.

It also has many numbers on market share, SI/channel development, and geographic footprint.

Source: Google Cloud Next 2017: Slow and steady race to greater enterprise public cloud adoption

If compliance is so important, bake it into the platform

Can we take that governance – that which they were doing on a per-application basis, which is, quite frankly, slowing down the delivery of the software – and have them work with the platform team to codify it, to actually automate it, and expose it as a service on the platform?

Cornelia Davis on governance and cloud-native, “Who Does What? Mapping Cloud Foundry Activities and Entitlements to IT Roles,” August 2016

In other words: you should not only automate the three-ring binders of compliance audits, but also enforce as much of that compliance as possible in the platform.
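
As a toy illustration of what “governance as code in the platform” might look like, here’s a sketch of a policy gate the platform could run against every deployment request instead of a human checking the binder per application. The rules and field names are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * A toy illustration of "compliance baked into the platform": the same
 * checks run automatically against every deployment request, rather than
 * being re-litigated per application. The rules and fields are invented.
 */
public class CompliancePolicyGate {

    record Violation(String rule, String detail) {}

    static List<Violation> check(Map<String, String> deployment) {
        List<Violation> violations = new ArrayList<>();

        if (!deployment.containsKey("owner")) {
            violations.add(new Violation("ownership", "every app must declare an owning team"));
        }
        if (!"approved-base-image".equals(deployment.get("baseImage"))) {
            violations.add(new Violation("golden-image", "builds must use the approved base image"));
        }
        if ("true".equals(deployment.get("publicSsh"))) {
            violations.add(new Violation("network", "SSH must not be exposed publicly"));
        }
        return violations;
    }

    public static void main(String[] args) {
        var request = Map.of("appName", "claims-api", "baseImage", "some-random-image");
        var violations = check(request);
        if (violations.isEmpty()) {
            System.out.println("Deploy allowed");
        } else {
            violations.forEach(v -> System.out.println("BLOCKED [" + v.rule() + "]: " + v.detail()));
        }
    }
}
```

The point isn’t these particular rules; it’s that the checks run the same way every time, for every app, as a service the platform provides.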

The rest of the talk is good stuff on how to think through re-arranging your organization to be all DevOps-y, with the help of Pivotal Cloud Platform to automate all the infrastructure and middleware stuff:

We’re getting exactly the government IT we asked for

If there’s one complaint that I hear consistently in my studies of IT in large organizations, it’s that government IT, as traditionally practiced, is fucked. Compared to the private sector, the amount of paperwork, the role of contractors, and the seeming disconnect between doing a good job and delivering working software drive all sorts of angst and failure.

Mark Schwartz’s book on figuring out “business value” in IT is turning out to be pretty amazing and refreshing, especially on the topic of government IT. He’s put together one of the better “these aren’t the Droids you’re looking for” responses to ROI for IT.

You know that answer: you just want to figure out the business case, ROI, or whatever numbers-driven thing, and all the DevOps-heads are like “doo, doo, doo-doo – driving through a tunnel, can’t hear you!” and then they pelt you with Goldratt and Deming books, blended in with some O’Reilly books and The Phoenix Project. “Also, your argument is invalid, because reasons.”

A Zen-like calm comes over them, they close their eyes and breathe in, and then start repeating a mantra like some cowl-bedecked character in a Lovecraft story: “survival is not mandatory. Survival is not mandatory. Survival is not mandatory!”

Real helpful, that lot. I kid, I jest. The point of their maniacally confusing non-answers is, accurately, that your base assumptions about most everything are wrong, so before we can even approach something as precise as ROI, we need to really re-think what you’re doing. (And also, you do a lot of dumb shit, so let’s work on that.)

But you know, no one wants to hear they’re broken in the first therapy session. So you have to throw out some beguiling, mind-altering, Lemarchand’s boxes to change the state of things and make sure they come to the next appointment.

Works as Designed

Anyhow, back to Schwartz’s book. I’ll hopefully write a longer book review over at The New Stack when I’m done with it, but this one passage is an excellent representation of what motivates the book-pelters and also a good unmasking of why things are the way they are…because we asked for them to be so:

The US government is based on a system of “checks and balances”—in other words, a system of distrust. The great freedom enjoyed by the press, especially in reporting on the actions of the government, is another indication of the public’s lack of trust in the government. As a result, you find that the government places a high value on transparency. While companies can keep secrets, government is accountable to the public and must disclose its actions and decisions. There is a business need for continued demonstrations of trustworthiness, or we might as well say a business value assigned to demonstrating trustworthiness. You find that the government is always in the public eye—the press is always reporting on government actions, and the public is quick to outrage. Government agencies, therefore, place a business value on “optics”—how something appears to the observant public. In an oversight environment that is quick to assign blame, government is highly risk averse (i.e., it places high business value on things that mitigate risk).

And then summarized as:

…the compliance requirements are not an obstacle, but rather an expression of a deeper business need that the team must still address.

Which is to say: you wanted this, and so I am giving it to you.

The Agile Bureaucracy

The word “bureaucracy” is something like the word “legacy.” You only describe something as legacy software when you don’t like the software. Otherwise, you just call it your software. Similarly, as Schwartz outlines, agile (like all software processes) is insanely bureaucratic, full of rules, norms, and other “governance.” We just happen to like all those rules, so we don’t think of them as bureaucracy. As he writes:

While disavowing rules, the Agile community is actually full of them. This is understandable, because rules are a way of bringing what is considered best practices into everyday processes. What would happen if we made exceptions to our rules—for instance, if we entertained the request: “John wants to head out for a beer now, instead of fixing the problem that he just introduced into the build?” If we applied the rules capriciously or based on our feelings, they would lose some of their effectiveness, right? That is precisely what we mean by sine ira et studio in bureaucracy. Mike Cohn, for example, tells us that “improving technical practices is not optional.” The phrase not optional sounds like another way of saying that the rule is to be applied “without anger or bias.” Mary Poppendieck, coauthor of the canonical works on Lean software development, uses curiously similar language in her introduction to Greg Smith and Ahmed Sidky’s book on adopting Agile practices: “The technical practices that Agile brings to the table—short iterations, test-first development, continuous integration—are not optional.” I’ve already mentioned Schwaber and Sutherland’s dictum that “the Development Team isn’t allowed to act on what anyone else [other than the product owner] says.” Please don’t hate me for this, Mike, Mary, Ken, and Jeff, but that is the voice of the command-and-control bureaucrat. “Not optional,” “not allowed” – I don’t know about you, but these phrases make me think of No Parking and Curb Your Dog signs.

These are the kind of thought-trains that only ever evoke “well, of course my intention wasn’t anything that awful!” from the other side. It’s like with ITIL, or the NRA gun-nut people: their goal wasn’t to put in place a thought-technology that harmed people, far from it.

Gently nestled in his wry tone and style (which you can imagine I love), you can feel some hidden hair-pulling about the unintended consequences of Agile confidence and decrees. I mean, the dude is the CIO of a massive government agency, so he be throwing process optimism against brick walls daily and too late into the night.

Learning bureaucracies

The cure, as ever, is not only to be smart and introspective, but to make evolution and change part of your bureaucracy. Otherwise:

Rules become set in stone and can’t change with circumstances. Rigidity discourages innovation. Rules themselves come to seem arbitrary and capricious. Their original purpose gets lost and the rules become goals rather than instruments. Bureaucracies can become demoralizing for their employees.

So, you know, make sure you allow for change. It’s probably good to have some rules and governance around that too.

Cloud-Native Cookbook – beyond “survival is not mandatory”

I started a new booklet project, the Cloud Native Cookbook.

The premise is this:

The premise of this book is to collect specific, tactical advice on transitioning to a cloud-native organization. The reader is someone who “gets it” when it comes to agile, DevOps, cloud native, and All the Great Things. Their struggle is actually putting it all in place. Any given organization has all of its own, unique advantages and disadvantages, so any “fix” will be situational, of course.

This cookbook draws from actual experiences of what worked and didn’t work to try to help organizations hack out a path to doing software better. While we’ll allow ourselves some “soft,” cultural things here and there, each of the “recipes” should be actionable, tangible items. At the very least, the rainbows and unicorns stuff should have concrete examples, e.g., how do you get people to actually pair program when they think it’s a threat to their self-worth?

As with my previous cloud-native booklet, I have this one open for comments as I’m working on it. It’d be great to get your input.

Here’s some slides I’ve been using around all this.

Pacing cloud-native transformation, and actually doing the work to increase productivity

I like to tell large organizations that compared to the break-neck pace of “the silicon valley mindset,” they can operate at a leisurely pace. That pace is usually fast for these enterprises, but their problem set and risk profile is a lot different than hats on cats. Abby has a nice, short write-up that hits on this topic among others:

By the end of his first year, Safford and his teams had built prototypes and market tests and finished 16 new software projects.

At Home Depot, they were at about 140 to 150 projects after a year or so. However, it’s common in the first year to do a lot of replatforming of “simple,” mostly cloud-native-compatible apps. You can do these at a pretty fast clip, with the rule of thumb being 10 apps in 10 weeks. This is in addition to new applications, and explains high numbers like those at Home Depot. I suspect the Allstate numbers are mostly net-new apps, though.

Goals:

Safford’s eventual goal is to shift Allstate software development to 70 percent extreme agile programming and 30 percent traditional scrum and waterfall. Where developers used to spend only 20 percent of their time coding software, today up to 90 percent of their days are spent programming. Each of his CompoZed development labs around the world has the same startup look and feel, including scooters parked in the hallways. This is not your grandfather’s insurance company anymore.

What you hear over and over again from organizations going cloud-native is that developers were spending lots of time in meetings, checking email, and otherwise not coding (and, yes, by “coding” I don’t mean just recklessly LOC‘ing it up without design, and all that). Management had to spend much effort to get them back to coding.

As I fecklessly tell my seven year old when he’s struggling with homework: the only way to finish this quickly is to actually do the work.

(Also: nice write-up from Abby!)

Source: Don’t Forget People and Process in Your Digital Transformation

The role of enterprise architects in cloud-native organizations

My colleague Richard has a nice post suggesting the new functions enterprise architects can play in a cloud-native organization. I like this one in particular, help make the change:

Champion new team organization patterns. As an architect, you can bring developers and operations teams together. Recognize that functional silos slow down delivery. A DevOps-type approach really works. Architects are perfectly positioned to pioneer new team structures that increase velocity and customer attentiveness.

It’s brief, but there’s plenty of other good chunks of advice in there.

Source: How to Remaster Enterprise Architecture for a Cloud-Native World

Making mainframe applications more agile, Gartner – Highlights

In a report giving advice to mainframe folks looking to be more Agile, Gartner’s Dale Vecchio and Bill Swanton give some pretty good advice for anyone looking to change how they do software.

Here’s some highlights from the report, entitled “Agile Development and Mainframe Legacy Systems – Something’s Got to Give.”

Chunking up changes:

  1. Application changes must be smaller.
  2. Automation across the life cycle is critical to being successful.
  3. A regular and positive relationship must exist between the owner of the application and the developers of the changes.

Also:

This kind of effort may seem insurmountable for a large legacy portfolio. However, an organization doesn’t have to attack the entire portfolio. Determine where the primary value can be achieved and focus there. Which areas of the portfolio are most impacted by business requests? Target the areas with the most value.

An example of possible change:

About 10 years ago, a large European bank rebuilt its core banking system on the mainframe using COBOL. It now does agile development for both mainframe COBOL and “channel” Java layers of the system. The bank does not consider that it has achieved DevOps for the mainframe, as it is only able to maintain a cadence of monthly releases. Even that release rate required a significant investment in testing and other automation. Fortunately, most new work happens exclusively in the Java layers, without needing to make changes to the COBOL core system. Therefore, the bank maintains a faster cadence for most releases, and only major changes that require core updates need to fall in line with the slower monthly cadence for the mainframe. The key to making agile work for the mainframe at the bank is embracing the agile practices that have the greatest impact on effective delivery within the monthly cadence, including test-driven development and smaller modules with fewer dependencies.

It seems impossible, but you should try:

Improving the state of a decades-old system is often seen as a fool’s errand. It provides no real business value and introduces great risk. Many mainframe organizations Gartner speaks to are not comfortable doing this much invasive change and believing that it can ensure functional equivalence when complete! Restructuring the existing portfolio, eliminating dead code and consolidating redundant code are further incremental steps that can be done over time. Each application team needs to improve the portfolio that it is responsible for in order to ensure speed and success in the future. Moving to a services-based or API structure may also enable changes to be done effectively and quickly over time. Some level of investment to evolve the portfolio to a more streamlined structure will greatly increase the ability to make changes quickly and reliably. Trying to get faster with good quality on a monolithic hairball of an application is a recipe for failure. These changes can occur in an evolutionary way. This approach, referred to in the past as proactive maintenance, is a price that must be paid early to make life easier in the future.

You gotta have testing:

Test cases are necessary to support automation of this critical step. While the tooling is very different, and even the approaches may be unique to the mainframe architecture, they are an important component of speed and reliability. This can be a tremendous hurdle to overcome on the road to agile development on the mainframe. This level of commitment can become a real roadblock to success.

Another example of an organization gradually changing:

When a large European bank faced wholesale change mandated by loss of support for an old platform, it chose to rewrite its core system in mainframe COBOL (although today it would be more likely to acquire an off-the-shelf core banking system). The bank followed a component-based approach that helped position it for success with agile today by exposing its core capabilities as services via standard APIs. This architecture did not deliver the level of isolation the bank could achieve with microservices today, as it built the system with a shared DBMS back-end, as was common practice at the time. That coupling with the database and related data model dependencies is the main technical obstacle to moving to continuous delivery, although the IT operations group also presents cultural obstacles, as it is satisfied with the current model for managing change.

A reminder: all we want is a rapid feedback cycle:

The goal is to reduce the cycle time between an idea and usable software. In order to do so, the changes need to be smaller, the process needs to be automated, and the steps for deployment to production must be repeatable and reliable.

The ALM technology doesn’t support mainframes, and mainframe ALM stuff doesn’t support agile. A rare case where fixing the tech can likely fix the problem:

The dilemma mainframe organizations may face is that traditional mainframe application development life cycle tools were not designed for small, fast and automated deployment. Agile development tools that do support this approach aren’t designed to support the artifacts of mainframe applications. Modern tools for the building, deploying, testing and releasing of applications for the mainframe won’t often fit. Retrofitting existing mainframe software version control and configuration management tools for a new agile approach to development will take some effort — if they will work at all.

Use APIs to decouple the ways of working, norms, and road-map of mainframes from the rest of your systems:

wrapping existing mainframe functions and exposing them as services does provide an intermediate step between agile on the mainframe and migration to environments where agile is more readily understood.
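
Here’s a rough sketch of that wrap-and-expose pattern (my illustration, not from the report): the mainframe transaction stays put, while a thin facade gives the rest of the organization a modern, decoupled interface to code against. The transaction name and fields are invented:

```java
import java.util.Map;

/**
 * Sketch of the "wrap and expose" pattern: the mainframe transaction stays
 * where it is; a thin facade gives the rest of the organization a
 * service-shaped interface to it. Transaction name and fields are invented
 * for illustration.
 */
public class CustomerBalanceFacade {

    /** Abstracts however you actually reach the mainframe (MQ, a CICS gateway, etc.). */
    interface MainframeGateway {
        Map<String, String> invoke(String transactionId, Map<String, String> input);
    }

    private final MainframeGateway gateway;

    public CustomerBalanceFacade(MainframeGateway gateway) {
        this.gateway = gateway;
    }

    /** The API the rest of the org codes against, on its own release cadence. */
    public String getBalance(String customerId) {
        Map<String, String> reply = gateway.invoke("BAL01", Map.of("CUSTID", customerId));
        return reply.getOrDefault("BALANCE", "0.00");
    }

    public static void main(String[] args) {
        // Stub gateway so the example runs without a mainframe.
        MainframeGateway stub = (txn, input) -> Map.of("BALANCE", "1234.56");
        System.out.println(new CustomerBalanceFacade(stub).getBalance("42"));
    }
}
```

The facade’s consumers can release as often as they like; only changes behind the gateway have to fall in line with the mainframe’s slower cadence.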

Contrary to what you might be thinking, the report doesn’t actually advocate moving off the mainframe willy-nilly. From my perspective, it’s just trying to suggest using better processes and, as needed, updating your ALM and release management tools.

Read the rest of the report over behind Gartner’s paywall.

How JPMC is making IT more innovative with PaaS, public and private

A good, pretty long overview of JPMorgan Chase’s plans for doing cloud with a PaaS focus. Some highlights.

More than just private-IaaS and DIY-platforms:

Like most large U.S. banks, JPMorgan Chase has had some version of a private cloud for years, with virtualized servers, storage and networks that can be shared in a flexible way throughout the organization.

The bank is upgrading its private cloud to “platform as a service” — in other words, the cloud service will manage the infrastructure (servers, storage, and networks), so that developers don’t have to worry about that stuff.

On the multi-/hybrid-cloud thing:

By the second half of 2017, the bank plans to run proprietary applications on the public cloud. At the same time, it’s building a new, modern internal cloud, code-named Gaia.

While “hybrid-cloud” has been tedious vendor-marketing-drivel over the past ten years, pretty much all of the large organizations I work with at Pivotal have exactly this approach. Public, private, whatever: we want to do it all.

Shifting their emphasis to innovation:

“We aren’t looking to decrease the amount of money the firm is spending on technology. We’re looking to change the mix between run-the-bank costs versus innovation investment,” he said. “We’ve got to continue to be really aggressive in reducing the run-the-bank costs and do it in a very thoughtful way to maintain the existing technology base in the most efficient way possible.” …Dollars saved by using lower-cost cloud infrastructure and platforms will be reinvested in technology, he said.

On appreciating the scale of “large organizations,” which drives their very real challenges with adopting new ways of running IT:

The bank has 43,000 employees in IT; almost 19,000 are developers.

Good luck having the “we have no process by design” process with that setup.

On security, there’s a nice, almost syllogistic re-framing of “cloud security” here:

For years, banks have worried about using the public cloud out of security concerns and fears of what their regulators will say. Ever since the 2013 Target data breach, in which hackers stole card information from 40 million customers by breaking into the computers of an air conditioning company Target used, regulators have strongly urged banks to carefully vet and monitor all third parties, with a specific focus on security.

“We’re spending a significant amount of time to ensure that any applications we choose to run on a public cloud will have the same level of security and controls as those run internally,” Deasy said.

Most notable corporate security breaches over the years have involved on-premises IT (like the HVAC example above). The point is not to make sure that “cloud is as secure as [all that on-prem IT that’s been the source of most security problems in the past],” but to make sure that all IT has a rigorous approach to security. “Cloud” isn’t the security problem; doing a shitty job at security is the security problem.

Update: or it could be 30,000.

Source: Unexpected Champion of Public Clouds: JPMorgan CIO Dana Deasy, Penny Crosman, American Banker

Drunk & Retired podcast archives – being a developer in the late 2000s

You can now check out most all of the episodes of my first podcast, Drunk & Retired, over at archive.org. I’m not sure where episodes 1 to 32 are, but maybe I’ll find them someday.

I did this podcast with my friend Charles Lowell from about 2005 to probably 2010 or 2012. Once we got big boy jobs and kids, we dropped off, though we went on to do other podcasts: Charles has the Frontside podcast, and I have all sorts of other ones.

Every now and then since, we’ve put one out, but really, not that often.

Webinar Recording: Understand the What, Why & How of Digital Transformation Featuring 451 Research

Earlier this month I did a webinar with Nick at 451. He does a great job summarizing all the digital hoopla going around and I finish up with, predictably, why and how Pivotal can help out there, along with a few customer examples. Check it out!