Posts in "tech"
Coté Memo #054: CA World wrap, Docker orchestration
A few weeks ago, I gave one of my occasional lectures at the University of Sydney, a class on the history of virtual reality and art. (Postmodern theories about disembodiment and self-representation from the 90s have found a new lease on life, thanks to Oculus and Google Cardboard.)
So that’s what the kids are learning about down under.
http://www.theregister.co.uk/2014/11/12/the_last_pc_replacement_cycle_is_about_to_start_turning/
And all of it, Nadella maintained, is part of a product focus which is less far-flung than it might look. When considering Microsoft’s products, “I just think about three things,” he said. “There’s Windows, there is Office 365, and there is [cloud platform] Azure. That’s it. Everything else, to me, you can call them features.”
Satya Nadella’s Microsoft Wants To Make Productivity Sexy, Inspiring, And Futuristic
Coté Memo #053: there's a lot of earth for software to eat, day 1 of #CAWorld
Docker has a straightforward CLI that allows you to do almost everything you could want to do to a container. All of these commands use the image id (ex. be29975e0098), the image name (ex. myusername/webapp) and the container id (ex. 72d468f455ea) interchangeably depending on the operation you are trying to do. This is confusing at first, so pay special attention to what you’re using.
That paragraph switches around pretty quick there. “You followin’ me, camera guy?!” My OODA loop just got chaffed up.
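To untangle it a little, here's a quick sketch of which identifier each command wants, using the example ids from the paragraph above (you'd substitute your own, and this assumes Docker is installed):

```shell
# Ids here are the paragraph's own examples, not real ones.
docker run -d myusername/webapp   # run by image *name*...
docker run -d be29975e0098        # ...or by image *id* -- both work
docker logs 72d468f455ea          # logs wants the *container* id
docker stop 72d468f455ea          # so do stop and rm
docker rm 72d468f455ea
docker rmi be29975e0098           # rmi removes the *image*, by id or name
```

Same short hex-ish string either way, which is exactly why it's easy to hand the image id to a command that wanted the container id.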
Coté Memo #052: Two types of clouds and headless doctors
Coté Memo #051: Meetings suck, links galore
Coté Memo #050: not much on Friday, pretty boring for #50
NBC Universal turned to Spark to analyze all the content metadata for its international content distribution. Metadata associated with the media clips is stored in an Oracle database and in broadcast automation playlists. Spark is used to query the Oracle database and distribute the metadata from the broadcast automation playlists into multiple large in-memory resilient distributed datasets (RDDs). One RDD stores Scala objects containing media IDs, time codes, schedule dates and times, channels for airing, etc. It then creates multiple RDDs containing broadcast frequency counts by week, month, and year and uses Spark’s map/reduceByKey to generate the counts. The resulting data is bulk loaded into HBase where it is queried from a Java/Spring web application. The application converts the queried results into graphs illustrating media broadcast frequency counts by week, month, and year on an aggregate and a per channel basis.
…
NBC Universal runs Apache Spark in production in conjunction with Mesos, HBase and HDFS and uses Scala as the programming language. The rollout in production happened in Q1 2014 and was smooth.
Apache Spark Improves the Economics of Video Distribution at NBC Universal – Databricks
Shit’s bonkers out there. If I’d have proposed that to an architect “back in my day,” they’d have told me to go shoot myself. They’d say: “uh, so, how about we just make a database table and ETL tool that does that?”
The last part - all those different things used - is amazing. Again, the architect would say: “we write things in Java. Try again.”
Granted, the point is: things like Spark and friends let you move beyond dealing with just tidy data and analytics. But, still, sloppy is as sloppy does, right?
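For the curious, the map/reduceByKey counting step that quote describes can be sketched in plain Scala, minus the cluster — hypothetical data and field names, no Spark dependency (in actual Spark it'd be `rdd.map(e => (key, 1)).reduceByKey(_ + _)`):

```scala
// Spark-free sketch of the counting step: group broadcast events by
// (media id, period) and count -- the same shape as map + reduceByKey.
case class Broadcast(mediaId: String, week: Int, month: Int, year: Int)

def countsBy[K](events: Seq[Broadcast], key: Broadcast => K): Map[K, Int] =
  events
    .map(e => key(e) -> 1)              // map: emit (key, 1) pairs
    .groupMapReduce(_._1)(_._2)(_ + _)  // reduceByKey: sum counts per key

val events = Seq(
  Broadcast("clip-1", week = 1, month = 1, year = 2014),
  Broadcast("clip-1", week = 1, month = 1, year = 2014),
  Broadcast("clip-2", week = 2, month = 1, year = 2014)
)

// Frequency counts at different granularities, as in the NBC pipeline.
val byWeek  = countsBy(events, e => (e.mediaId, e.year, e.week))
val byMonth = countsBy(events, e => (e.mediaId, e.year, e.month))
```

Which, yes, is roughly the group-by the hypothetical architect wanted — the Spark part is running it over a cluster's worth of playlists instead of a `Seq`.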