The video recording of this week’s Software Defined Talk, right there. If you prefer audio (I do!) check out the official show-notes.
(Source: https://www.youtube.com/)
docker has a straightforward CLI that allows you to do almost everything you could want to do with a container. All of these commands use the image id (ex. be29975e0098), the image name (ex. myusername/webapp) and the container id (ex. 72d468f455ea) interchangeably depending on the operation you are trying to do. This is confusing at first, so pay special attention to what you’re using.
That paragraph switches around pretty quick there. “You followin’ me, camera guy?!” My OODA loop just got chaffed up.
NBC Universal turned to Spark to analyze all the content meta-data for its international content distribution. Metadata associated with the media clips is stored in an Oracle database and in broadcast automation playlists. Spark is used to query the Oracle database and distribute the metadata from the broadcast automation playlists into multiple large in-memory resilient distributed datasets (RDDs). One RDD stores Scala objects containing media IDs, time codes, schedule dates and times, channels for airing etc. It then creates multiple RDDs containing broadcast frequency counts by week, month, and year and uses Spark’s map/reduceByKey to generate the counts. The resulting data is bulk loaded into HBase where it is queried from a Java/Spring web application. The application converts the queried results into graphs illustrating media broadcast frequency counts by week, month, and year on an aggregate and a per channel basis.
…
NBC Universal runs Apache Spark in production in conjunction with Mesos, HBase and HDFS and uses Scala as the programming language. The rollout in production happened in Q1 2014 and was smooth.
Apache Spark Improves the Economics of Video Distribution at NBC Universal – Databricks
Shit’s bonkers out there. If I’d have proposed that to an architect “back in my day,” they’d have told me to go shoot myself. They’d say: “uh, so, how about we just make a database table and an ETL tool that does that?”
The last part - all those different things used - is amazing. Again, the architect would say: “we write things in Java. Try again.”
Granted, the point is: things like Spark and friends let you move beyond dealing with just tidy data and analytics. But, still, sloppy is as sloppy does, right?
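For the curious: the map/reduceByKey part of that NBC Universal pipeline boils down to “key each airing by (channel, period), then count per key.” Here’s a toy sketch of that logic in plain Python — no Spark, and the record fields are my own hypothetical stand-ins for the media IDs, schedule dates, and channels the quote describes.

```python
from collections import Counter
from datetime import date

# Hypothetical airing records (media id, air date, channel), standing in
# for the RDD of Scala objects holding media IDs, schedule dates, channels, etc.
airings = [
    ("clip-1", date(2014, 1, 6), "Syfy"),
    ("clip-1", date(2014, 1, 8), "Syfy"),
    ("clip-2", date(2014, 2, 3), "Bravo"),
    ("clip-1", date(2014, 2, 3), "Bravo"),
]

def freq_counts(records, key_fn):
    """Map each record to a key, then count occurrences per key -- the
    essence of Spark's map + reduceByKey(_ + _) for frequency counts."""
    return Counter(key_fn(r) for r in records)

# Broadcast frequency counts by ISO week, month, and year, per channel.
by_week  = freq_counts(airings, lambda r: (r[2], r[1].isocalendar()[:2]))
by_month = freq_counts(airings, lambda r: (r[2], (r[1].year, r[1].month)))
by_year  = freq_counts(airings, lambda r: (r[2], r[1].year))
```

In Spark the same shape would be `airings.map(keyFn(_) -> 1).reduceByKey(_ + _)`, run across the cluster instead of one process; the point is just how little “business logic” there is under all that infrastructure.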
[audio cote.files.wordpress.com/2014/08/u…_010.mp3]
Summary
We discuss thinking beyond human error as Bill starts to summarize the book Behind Human Error. It’s always helpful to look at how the system and process caused the wrong move. Also, thinking about hardware, and some nice feedback from designers.
Subscribe to the feed: http://feeds.feedburner.com/UnderDevPodcast
Your friends @cote and @BillHiggins
Hardware, what is it?
- Coté is confused about how to think about hardware.
- The IBM “brain chip”
Follow-up on “design”
- Some follow-up from a designer: of course they iterate, dummy!
- The air hockey meat-mallet (actually called the “OXO Good Grips Meat Pounder”)
- Where Good Ideas Come From and avoiding the Ferrari to Pinto transformation.
- Design documentary movies: Objectified, Helvetica, and the upcoming Urbanized.
Human Error
- Bill goes over Behind Human Error, causing us to discuss how various pipelines (systems) in product management work in waterfall and non-waterfall mode.
- How do product managers fit into a design-heavy pipeline?