Here it is, my last day at the summit and therefore in San Francisco.
I am here blogging from the San Francisco airport (SFO if you are IATA-code oriented) – let me tell you about this last day.
No keynote this time. One went directly to the individual presentations right after breakfast (which was quite late in the morning compared to the other days). The summit ended at lunch time, so I could only attend two presentations, and I chose presentations from the DevNation track.
Java Puzzlers: Something old, something Gnu, something bogus, something blew, Josh Bloch & Bob Lee
This session was really fun. The two speakers really got the audience excited; they were funny and very good.
What did they do? They went through 9 Java puzzles, i.e. snippets of Java code, and asked the audience what each of them would print on the screen. And of course, what the audience expected was never what the code actually output; they then explained why it behaved like this, and what moral (best practices) should be drawn from it (e.g. be symmetrical in your serialization/deserialization operations, be consistent in your APIs, be careful with autoboxing…).
I won’t cover the puzzles I saw there, because they are probably in a book by one of the presenters – which I will probably try to borrow (or steal, or buy if there is no other choice). Mostly they were about collections, circular initialization, inconsistent APIs, nasty Java language features, autoboxing…
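To give a flavour of the genre, here is a classic autoboxing trap in the same spirit (my own minimal example, not necessarily one of the puzzles shown on stage): autoboxed Integers are cached only in the range -128..127, so reference comparisons with == behave inconsistently.

```java
public class AutoboxPuzzler {

    // Reference comparison on boxed Integers -- the trap.
    static boolean sameRef(Integer a, Integer b) {
        return a == b; // compares references, not values!
    }

    public static void main(String[] args) {
        System.out.println(sameRef(127, 127)); // true: both boxed to cached instances
        System.out.println(sameRef(128, 128)); // false on a default JVM: fresh objects
        System.out.println(Integer.valueOf(128).equals(128)); // true: always use equals()
    }
}
```

The moral matches the one from the talk: be careful with autoboxing, and always compare boxed values with equals().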
I’m too busy to deal with security, Bill Burke
This second and last session was also very interesting. In fact, Bill Burke mainly talked about the project he is working on: Keycloak.
I found this presentation very interesting because Keycloak is what we are trying to achieve at work for the Airport IT products I am working on. Keycloak is brand new and not finished (version 1.0 is targeted for June and it lacks important features like high availability), ours is not complete either, but I can say we both share the same roadmap.
What does Keycloak try to achieve? According to the presentation, it is an authentication server which manages sessions and on top of which it is easy to add features like: sign-in with an external account (e.g. Google), two-factor authentication, password reset, registration, remember me…
Since Keycloak manages the user sessions, it can provide SSO in a SOA environment. It also provides “single sign off”, so that if a user session has expired, the user is logged off everywhere. It does this by generating authentication tokens (based on JWT) and appears to handle cross-origin requests securely.
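To give an idea of what such a token carries, here is a minimal sketch (my own illustration, not Keycloak code) that peeks at the payload of a hypothetical JWT using only the JDK. A JWT is three Base64URL-encoded segments (header.payload.signature); the payload is plain JSON holding claims like the subject and expiry.

```java
import java.util.Base64;

public class JwtPeek {

    // Decode the payload (middle segment) of a JWT *without* verifying
    // the signature -- fine for inspection, never for authentication.
    static String payloadOf(String jwt) {
        String[] parts = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    public static void main(String[] args) {
        // Build a hypothetical token by hand for the demo.
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header  = enc.encodeToString("{\"alg\":\"RS256\"}".getBytes());
        String payload = enc.encodeToString("{\"sub\":\"alice\",\"exp\":1400000000}".getBytes());
        String token = header + "." + payload + ".fake-signature";

        System.out.println(payloadOf(token)); // {"sub":"alice","exp":1400000000}
    }
}
```

A real server of course checks the signature before trusting any claim in the payload.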
It is for me definitely a project to keep a close eye on.
End of the day and return to France
After a final lunch (a lunch box), the Red Hat Summit was over.
The return flight was in the evening, so I had some time in the afternoon to explore San Francisco one last time. I went with my colleague up north to see the Golden Gate Bridge from a closer point (next to the Palace of Fine Arts). The weather was very nice. Getting there, walking around and going to the airport took us a good amount of time.
I will board the plane soon. Final thoughts on the summit: it was a great experience, lots of interesting people, lots of promising technologies that we should have a look at, and most of all, open source rocks.
Red Hat Summit 2015 will be in Boston.
Now I should board the 11-hour flight back to Europe… See you.
Another day at the Moscone center for the Red Hat Summit.
I did not really talk about the breakfast but this is how each day starts here. You grab a coffee, some pastries, some oatmeal and fruits like this:
And then you sit at tables and can talk with other people. That’s how I met people from the Clay Institute administering super mega computers for scientific computations. Quite interesting to talk with these people, who were all workaholic sysadmins…
The Red Hat summit keynote
As usual, the day started with a keynote.
We first had a presentation from Steve Bandrowczak from HP. HP being a partner of the summit, the presentation somehow followed the pattern of the previous days. Steve indeed shared HP’s views on the current state of IT. His conclusion is that we are now in the era of mobile, big data, social networks and cloud computing. In that regard, IT solutions will have to lean toward simplicity, agility, velocity and cost efficiency.
Brian Stevens, CTO of Red Hat, then came on stage, where he listed the different technologies Red Hat seems to focus on: Docker, OpenShift, OpenStack to address the cloud and DevOps (continuous integration and continuous deployment, aka CI/CD) trends, as well as Red Hat Storage (based on GlusterFS) to address the big data trend.
The keynote ended with a presentation by Sam Greenblatt from Dell, whose goal was clearly to show how Dell hardware is very well adapted to the recent Red Hat solutions: OpenShift, OpenStack and GlusterFS.
Introduction to OpenShift for application developers, Steven Pousty
This presentation from DevNation was very good, very concrete and very well suited for developers. OpenShift is Red Hat’s PaaS solution – a bit like Google App Engine, but with so-called “cartridges” through which developers can compose their own environment (e.g. an Apache server, a JBoss server, a MongoDB instance…). OpenShift is open source, so it can be deployed anywhere (in a private datacenter or on a public cloud like Amazon). But Red Hat provides a public OpenShift instance deployed on Amazon, where one can deploy anything they want. A free account gives the right to up to 3 OpenShift gears.
(in case you are wondering, the presentation is done with reveal.js).
The presenter did a demo of how to deploy some Python code on OpenShift Online. He also deployed a JBoss EAP and a MongoDB instance. My impression is that it is very quick, much quicker at least than requesting dedicated machines with custom software from administrators. Here we can play safely with anything we want. My opinion is that it is so far very good for development/prototyping usage.
It seems the next development step of OpenShift will be to migrate their “cartridge” to the Docker container technology.
This session was full of people. Docker is trendy! Even though the 1.0 version has not been released yet.
Anyway, Docker is an open source container technology that enables applications and their dependencies to be encapsulated with the idea of being run anywhere. The idea is to code the application once inside a Docker container and then deploy it anywhere in a consistent and reliable manner (e.g. with the possibility to be scalable, resilient…). I suggest readers have a look at their website to get a view of how it works.
The demo showed how one can easily create a CentOS container with the Docker command-line tool. And indeed, it seems quite promising, although some work still needs to be done.
The presenters exposed the priorities to address before a 1.0 version can eventually be released:
Enlarging the portfolio of OSes and architectures they can “containerize”
Having a stable control API
Having a stable plugin API to extend Docker
Working on resilience and clustering capabilities
Openstack for developers, Kambiz Aghaiepour & Dan Radez
Well, we can say this day was quite cloud-oriented for me. OpenStack is the IaaS solution proposed by Red Hat.
That was also very interesting. During the presentation, we had a little demo where the presenters set up a virtual network of virtual hosts.
They used PackStack to quickly set up an empty OpenStack environment; they then did the following:
Manage the identity of Openstack users and tenants using Keystone
Create images and add them to the image catalog (with Glance)
Create a virtual network (with Neutron)
Create instances (with Nova)
Create file volumes (with Cinder)
Create object storage (with Swift)
They also showed the Horizon dashboard, which is a pretty UI for doing all of this. I especially appreciated how the network topology is displayed there.
After this presentation, I also took note that I should try TryStack.org to get familiar with what it is like to be an OpenStack user.
JBoss in the trenches, Andrew Block & Tim Bielawa
That was quite atypical. This was the experience report of two Red Hat engineers who participated in the migration of Red Hat’s own IT infrastructure from EAP 4/5 to 6! Quite an interesting point of view, isn’t it? It took these guys one year to complete their project. By the way, the title of the presentation does not seem to be related to WW1.
What I learned from this presentation is the set of tools they used to ease their migration:
They used Windup to help identify the impact of the migration on the code.
For deployment and configuration management, they used Puppet (with a special libeap plugin for handling JBoss EAP). They also plugged Puppet directly into Git to automate the deployment as much as possible.
For the monitoring, they used jconsole, Splunk, Munin, Taboot, Nagios and JBoss ON.
Get the most out of EAP6 & the JVM, Ståle W Pedersen & Andrig T Miller
A Red Hat team is always trying to improve the performance of the JBoss EAP for each release going out.
That was the subject of this presentation, and it was the occasion to summarize the performance improvements made along the EAP releases from 6.0 to the future 6.3.
It seems in fact that most of the performance improvements have been made on Hibernate and Infinispan (JSP processing is also concerned for EAP 6.3). In that regard, the presenters described what they spotted and how they corrected the performance issues.
Not being a huge fan of Hibernate, I did not really note the tips they gave about it. I however noted the tip concerning Infinispan, where one should force it to use the ConcurrentHashMap of JDK 8 for better efficiency (they are users of Java Mission Control). Also, for socket descriptors in EAP 6.1, one should look at NIO2 instead of the native connectors. I guess those tips can easily be found on the Web.
The next focus of the performance team for the next version of EAP will be:
Further optimization of JPA and JTA
Profiling of JSF and CDI
Challenging dynamic proxies, since they may incur a performance overhead
Evaluating the correctness of used data structures
I had some time in the evening so I did a little walk in the streets of San Francisco, up to Pier 39 where I saw the seagulls and the sea lions. There was a nice sunset over the Bay.
Tomorrow will be the last day of the summit and de facto my last day in San Francisco. Stay tuned for the epilogue of this journey.
This was my second day at the summit. Many more people than yesterday: it seems the people actually attending the summit have arrived. On Monday morning, there was only DevNation. And now thousands of people queued at the breakfast stands for coffee and pastries. The Wifi connection also suffered from all those people carrying laptops and other connected devices.
Anyway, there were again very good presentations. Let me summarize them in this post. I noticed that videos of some of them are available on the Red Hat Summit YouTube channel.
Like yesterday, it was an American show for this keynote. Three speakers gave their views on the evolution of IT for the coming years, with big pictures in the background. There was in fact one talk from a Red Hat representative followed by presentations from Red Hat Summit sponsors: Cisco and Intel.
Paul Cormier, President of Products & Technologies at Red Hat, first reminded us how Linux and open source in general have been an asset since the beginning. His speech was in fact the occasion to tell the history of infrastructure evolving from physical to virtual machines up to cloud computing, and how Linux and the open-source-based Red Hat products are participating in these trends. He then quoted the new Red Hat products: RHEL 7, OpenStack, OpenShift…
Padmasree Warrior, CTO of Cisco, then came on stage with a presentation sharing her company’s vision of the current trends of IT. Her focus was on the Internet of Things, which she nicely rephrased as the “Internet of Everything”. Hence the necessity of a robust infrastructure which can handle the huge amount of data that this set of interconnected devices will generate.
Finally, Douglas W Fisher from Intel concluded the keynote by sharing his opinion on the next-generation data center and the challenges which will become predominant in the coming years: security and privacy (and compliance with legislation), performance, uptime, cost, energy efficiency, storage, virtualization and finally data harnessing (aka big data).
Besides, his introductory video was funny, which is why I cannot resist embedding it in this blog post.
The future of middleware: Java, enterprise engineering and Fuse
Some big stars were on stage:
Mark Little, VP of engineering at Red Hat
Rob Davies, the technical director of Fuse engineering at Red Hat
Pete Muir, senior architect at Red Hat (I already saw him at a JUG on the Riviera where he did a presentation on CDI – that was in 2010 and his Scottish accent has not changed much since then)
Surprisingly enough, this talk was not very structured. One could have expected to get a roadmap of the JBoss middleware, but the speakers mostly shared their personal opinions on how Java middleware should evolve.
For Mark Little, multi-core, the Internet of Things, REST-based architectures, cloud and modularity are now obvious trends. Java will always play a part in them, and most of all Java EE. But one cannot rely on it alone since, according to him, technologies like Node.js or Akka seem to fill some gaps (e.g. asynchronous communication) that Java EE is not entirely addressing. Hence the Vert.x initiative of Red Hat to get onto that path.
Rob Davies emphasized the need to evolve toward micro-service architectures, some kind of SOA but without the implicit distribution. And for him, Fuse and Camel will play a substantial role in this direction.
Jason Greene’s point of view was more that the future dwells in the intelligent provisioning of resources while guaranteeing even load balancing, low costs, reliability, security…
Although it was interesting, I am a bit disappointed by these presentations, since I would have expected more concrete facts, possibly in the form of a Red Hat middleware product roadmap. But I guess this has maybe been done in other sessions.
Lab: Automate your business with Red Hat JBoss Middleware integration & BPM
During this summit, I wanted to participate in a lab session, so I registered for this one. I believed it was one of the labs where I could get my hands dirty doing some code.
It was in fact quite click-oriented… But anyway, it was interesting and the lab was quite well organized. I was in a room where each participant had a computer and a tutorial to follow.
The topic? It was about BRMS (business rules management systems) and BPM (business process management). I guess the holy grail there is to have business analysts able to “code”: either by writing business rules in a BRMS (“if one pays $50, then apply a discount”) or by modelling the flow as a diagram in a BPM system.
I admit BPM and BRMS systems have always interested me. I do not share the opinion that business analysts will eventually replace coders (what a nightmare…), but at least BRMS are fun (I like declarative programming and the inference algorithms) and I think BPM could be useful for monitoring and statistics purposes.
I played with the corresponding open source technologies (Drools, jBPM) years ago, and with this lab I was quite impressed by the progress made since then, especially concerning the workflow execution monitoring and analytics part of BPM, and also the web-based modelling of workflows.
Unfortunately I could only do the first part during this 2-hour lab, but anyway, that was good and the trainer was very friendly.
What’s new with Red Hat JBoss Operations Network, Heiko W. Rupp & Thomas Segismont
This session mainly introduced the new features of the latest version of JBoss ON (aka RHQ for the open source version), i.e. version 3.2. It was also the occasion for the JBoss ON developers present on stage to introduce what will be in version 3.3 (targeted for around September).
So what is new with 3.2?
New charts to display metrics (at least enhanced charts displayed in the UI)
Storage of metrics moved from an RDBMS to Cassandra
A brand new REST API exposed by JBoss ON to access metric information or push metrics to it
Fine-grained bundle permissions – it is now possible to set security roles per deployable bundle
And what about 3.3? The focus seems to be on limiting the footprint of the sensor agent and on better support for monitoring JBoss EAP 6 and deploying over it.
All in all, that was a good presentation, very detailed at least. I was pleased to see that JBoss ON was used by quite a lot of the people present in the room.
In fact, later in the day, I had the luck to intercept one of the speakers, Thomas, in a corridor of the summit. He is French, and I discussed some of the concerns we have at work concerning monitoring and how we are surveying different solutions (including RHQ) to improve it. I particularly asked about the convergence of Hawtio and JBoss ON, which have big similarities. He answered that the trend is in fact to converge, but the first focus seems to be the introduction of modularity into JBoss ON (e.g. the storage layer and the REST API), so that it can be reused by the Fuse products.
After that, I followed him to the “Birds of a Feather” session about JBoss ON. There I learned that alerting in JBoss ON is quite customizable, so I think it would not be a problem if we have to interface it with our in-house incident system. Some people also shared their experience with JBoss ON: they are indeed numerous, and they mostly use it for JBoss/Tomcat monitoring.
Recipes to analyze common performance issues, William Cohen
That was a DevNation presentation and I admit it maybe did not fit me very well.
The topic was performance measurement and which tools to use in which case. The presentation was good because it was a big catalog of the tools to use given the type of performance measurement you want to make: processor speed, cache performance, memory bandwidth, network/storage bandwidth or latency, locking or synchronization… I find it would be useful to get the slide where everything was summarized. Let’s see if it goes online someday…
But then the presentation went deep into the usage of some Linux tools that I did not know and that are very low level: SystemTap, perf, OProfile…
I was admittedly a bit lost, but when I am lost in a technical speech at the end of the day, it feels like listening to nice poetry…
That is the summary of my first day at the summit.
First impressions: awesome presentations (I hope the videos will be online soon), high quality speakers, lots of interesting open-minded people talking about development, nice goodies, nice food, free beers and good coffee.
Other secondary impressions: it seems it is very trendy to talk about DevOps, i.e. how to have developers and operations working in harmony. The other buzzword seems to be PaaS – at least in communications from Red Hat (OpenShift, OpenStack, Docker…).
DevNation Opening Session, Neal Ford
After a good breakfast and a small recap of the DevNation agenda, the first talk was from Neal Ford, a software architect who I think is an independent consultant. His presentation was about the principles of agile architectures. If that sounds a bit “buzzy”, it was in fact very passionate and interesting, highlighting concrete good and bad architectural patterns when it comes to agility, i.e. being able to respond quickly to changes. Examples include the micro-services architecture (which recommends having isolated independent services rather than a monolithic application) or the CQRS (Command and Query Responsibility Segregation) pattern when designing CRUD applications.
He also drifted a bit into other conclusions:
Continuous delivery (i.e. reducing the time between the commit of code and the production of a ready-to-deliver artifact) is an ideal one should strive for.
Dev and operations should work closely together. The maxim of Amazon was quoted: “You build it, you run it”, making developers responsible for the code they write.
Reactivity should be preferred over planning, since changes are inevitable.
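As an aside, the CQRS pattern mentioned in this talk can be sketched in a few lines of Java (my own toy illustration, not code from the presentation): commands go through a write-side handler that records events, while queries hit a separate read model built from those events, so each side can evolve and scale independently.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CqrsSketch {

    // Write side: commands mutate state through a single handler,
    // which here simply records events.
    static class CommandHandler {
        private final List<String> eventLog = new ArrayList<String>();

        void handlePlaceOrder(String item) {
            eventLog.add("ORDER_PLACED:" + item);
        }

        List<String> events() {
            return Collections.unmodifiableList(eventLog);
        }
    }

    // Read side: a query model built from the events, free to use a
    // denormalized shape suited to the queries it serves.
    static class OrderQueryModel {
        private final List<String> orders = new ArrayList<String>();

        void apply(String event) {
            if (event.startsWith("ORDER_PLACED:")) {
                orders.add(event.substring("ORDER_PLACED:".length()));
            }
        }

        int orderCount() {
            return orders.size();
        }
    }

    public static void main(String[] args) {
        CommandHandler commands = new CommandHandler();
        commands.handlePlaceOrder("book");
        commands.handlePlaceOrder("coffee");

        OrderQueryModel queries = new OrderQueryModel();
        for (String event : commands.events()) {
            queries.apply(event);
        }
        System.out.println(queries.orderCount()); // 2
    }
}
```

In a real system the two sides would typically live in different services with asynchronous event propagation between them.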
Inspecting JVMs with Hawtio, Stan Lewis
I wanted to attend this presentation since we are starting to investigate monitoring tools that could be useful at work.
Hawtio is a monitoring and management web console. It comes de facto with Fuse or with Active MQ (which I am familiar with).
The good thing about Hawtio is that it is nice and extensible. There exists a variety of plugins to monitor and manage a wide range of things:
The JMX plugin enables the monitoring of any metric and the invocation of operations
The Health plugin can show a page as an aggregation of the health of different components or applications (think green/red panel to give a first-glance health of the systems)
The Active MQ plugin to monitor the queues and topics, send messages…
The Camel plugin enables the visualization (as a graph) and monitoring of the different Camel routes. It also enables message tracing and debugging (we can set breakpoints in the routes)
The log plugin to browse the logs of an application
And much more: JBoss/Tomcat management, a dashboard panel to display any metric, an OSGi console, a Wiki module to host application documentation…
From the demonstration, it appears possible to manage different hosts at the same time. It only requires connecting to an application where Jolokia is deployed (Jolokia is a REST connector for JMX).
However, to my mind, Hawtio lacks some important features: the persistence of the monitoring information and the management of JMX notifications and alerting.
Resilient enterprise messaging with RH JBoss A-MQ, Scott McCarty and Scott Cranton
I went to this presentation but I did not learn a lot more than the recommendations of the Active MQ documentation concerning the high-availability topology, where it is recommended to have each Active MQ instance “backed up” by a slave broker. In a network-of-brokers topology involving N nodes, we then have to provision 2N brokers (one active and one passive for each node of the network). The active-passive behaviour is implemented via some locking (the master broker is the one holding the lock), and the two brokers should either share the same message store (an RDBMS or a NAS) or replicate it (but in that case the replication should be synchronous if one does not want to lose messages). The latest version of Active MQ (5.9) introduces LevelDB as a message store, which replicates messages to at least 2 other nodes (therefore requiring a 3N topology). Although exciting, the presenters said it was maybe not mature enough to risk using it in production :-)
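The master/slave election described above can be sketched with a toy lock (my own in-memory stand-in, purely for illustration; a real Active MQ master/slave pair competes for a file or database lock on the shared message store):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class MasterElection {

    // Stand-in for the shared lock (in Active MQ this is a lock on the
    // shared message store, e.g. a file lock on a NAS or a DB row lock).
    private final AtomicBoolean lock = new AtomicBoolean(false);

    // Each broker tries to grab the lock at startup; the winner becomes
    // master, the loser stays passive and keeps retrying.
    public boolean tryBecomeMaster() {
        return lock.compareAndSet(false, true);
    }

    // Releasing the lock models the master going down.
    public void release() {
        lock.set(false);
    }

    public static void main(String[] args) {
        MasterElection store = new MasterElection();
        System.out.println(store.tryBecomeMaster()); // true: first broker wins
        System.out.println(store.tryBecomeMaster()); // false: second stays slave
        store.release();                             // master goes down
        System.out.println(store.tryBecomeMaster()); // true: the slave takes over
    }
}
```

The same compare-and-set idea, applied to an external shared resource instead of an in-process flag, is what makes the failover automatic.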
The presenters also talked about Fabric, which I was not very familiar with. Fabric eases the provisioning and creation of Active MQ instances and topologies by centralizing the configuration in the Fabric server. It therefore avoids editing each activemq.xml configuration file.
With Fabric, Active MQ instances are given logical names and can be managed in groups. One easily sees which instance is active or passive, which Active MQ connects to which group, etc. We can also avoid the recommended 2N nodes by making a standby node agnostic of the Active MQ instance it is the slave of.
During this presentation, Red Hat JBoss Fuse (and therefore A-MQ) was just released as version 6.1.
Why everyone needs DevOps now, Gene Kim
This enthusiastic presentation was again a plea for DevOps, i.e. why developers and operations should work closely together. The presenter based his conclusions on various facts he gathered from successful companies like Amazon, Facebook, Google, Netflix… He said, for example, that at Google developers are responsible for managing, maintaining and deploying their code in production for 6 months before a handover to operations.
The outcome is in fact the DevOps principles, which can be summarized as:
Production releases should be done at a high rate (e.g. one every 10s on Amazon)
Environment creation should be fast and easy
Automation of testing is necessary
Monitoring is crucial, so that developers can see the impact of their deliveries, potentially immediately.
I was surprised that most of the analysis he did came from a book I read called “The Goal”, which is mainly about the manufacturing world, and which he adapted into a book called “The Phoenix Project”.
He also emphasized the concept of delivering a feature vs. delivering code. He quoted the example of Facebook, which released the chat feature to production far before enabling it. This allowed “live” testing of the code (each user had some hidden “chat sessions” which exercised the chat feature directly in production)…
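This “ship the code dark, enable the feature later” idea boils down to a feature toggle. A minimal sketch (my own illustration, not Facebook’s actual system):

```java
import java.util.HashMap;
import java.util.Map;

public class FeatureFlags {

    private final Map<String, Boolean> flags = new HashMap<String, Boolean>();

    public void set(String feature, boolean enabled) {
        flags.put(feature, enabled);
    }

    // Dark features default to off: the code is deployed but dormant.
    public boolean isEnabled(String feature) {
        Boolean enabled = flags.get(feature);
        return enabled != null && enabled;
    }

    public static void main(String[] args) {
        FeatureFlags flags = new FeatureFlags();
        System.out.println(flags.isEnabled("chat")); // false: deployed but dark
        flags.set("chat", true);                     // the flip of the switch
        System.out.println(flags.isEnabled("chat")); // true: feature goes live
    }
}
```

In a real deployment the flag store would be external (a database or configuration service) so features can be flipped without redeploying, and possibly enabled only for a subset of users.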
JVM finalize Pitfall, Jason Greene
This presentation was probably the geekiest I attended today (and maybe for my whole life). But it was very good though…
The subject: the danger of implementing the finalize() method in a class.
The finalize() method is called by the JVM on an object right before it is garbage collected. And actually, I have always been told that implementing this method was dangerous, because of the performance overhead (the method is called before a GC) and most of all because of the nasty side effects (potential leaks or corruption).
So I was a bit surprised about the topic of the presentation. Why talking about the finalize() method?
It seems that the Java specification considers the implementation of finalize() legitimate for a particular case: protecting against resource leaks by closing resources on garbage collection. Think for example of a FileInputStream that the user forgot to close in a finally block. This resource should be closed anyway, and this can be done in the finalize() method. Actually, it is similar to a C++ destructor…
But implementing finalize() has many dangers, especially because one does not know when GC is called… The presenter even showed us a case where GC could be invoked while a method of the object being collected was still executing?! That was on OpenJDK. The explanation lies in the JDK implementation, which may be doing invisible optimizations you do not know about.
Hence some tips for doing this safely, which I won’t copy here since they are on the presenter’s GitHub. In a nutshell, they may involve the synchronized or volatile keywords and they are not so easy to understand.
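To make this concrete, here is a hedged sketch of the safety-net pattern (my own reconstruction of the idea, not the presenter’s code from GitHub): close() is the normal path, finalize() only catches the case where the caller forgot, and synchronized guards the two paths against racing each other.

```java
public class GuardedResource {

    private boolean closed = false;

    // Normal path: callers should invoke this in a finally block.
    // Idempotent on purpose, since it may also be reached via finalize().
    public synchronized void close() {
        if (!closed) {
            closed = true;
            // ... release the underlying resource here ...
        }
    }

    public synchronized boolean isClosed() {
        return closed;
    }

    // Safety net only: invoked by the GC at an unpredictable time,
    // possibly never. It must tolerate running concurrently with an
    // explicit close() -- hence the synchronization above.
    @Override
    protected void finalize() throws Throwable {
        try {
            close();
        } finally {
            super.finalize();
        }
    }
}
```

Even with these guards, the explicit close() in a finally block remains the only reliable path; the finalizer merely limits the damage of a forgotten one.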
Here is the start of the Red Hat Summit. The previous presentations were DevNation ones. So as soon as the summit started, the Moscone Center got more and more crowded. Among others, some colleagues of mine were there.
The Summit started with a keynote. And that was quite an American show. The auditorium was big, with big screens and broadcasting cameras everywhere…
The keynote was mostly about the different trends the Middleware division of Red Hat is heading toward (Fuse, Vert.x, Fabric…), but the most interesting part was the live demo where a bunch of Red Hat developers came on stage to demonstrate the whole Red Hat middleware stack.
They gathered a pile of laptops in the middle of the stage and installed OpenStack and OpenShift on them. They installed Fuse and some Camel routes listening to a Twitter feed and sending the tweets to an Active MQ queue.
Another application, built with JBoss BPM (i.e. jBPM), processed the messages of the queue, did some analytics, retrieved the Twitter user’s contact from the Red Hat Salesforce CRM and, if a match existed, sent a text message to the user’s phone.
Quite an impressive demo actually.
After the keynote, we could stroll in the exhibition halls to see the different partners. I had some talks with various guys from Red Hat and from ElasticSearch, which had a booth there.
There was also food and beer.
The final keynote and the 10 year celebration
Here Jim Whitehurst, CEO of Red Hat, gave a dynamic presentation. I was waiting for something like the “developers, developers, developers…” of Steve Ballmer; it was not really like that, but never mind…
However, the stress here was on the PaaS strategy of Red Hat, something they call xPaaS (for extended PaaS).
Finally, the day ended with a band and dancers suddenly appearing on the stage. The Red Hat Summit is indeed 10 years old, hence this celebration.
The trip was OK… as long as you can enjoy the taste of the economy-class meal (something my Italian colleague would call a bad joke, but that airline catering calls a risotto) and the comfort of the seat (shrunk by the invasion of my neighbor’s body over the armrest). Anyway, customs let us pass the border, and the check-in at the hotel was fine, as expected.
I have been strolling around. San Francisco has not changed that much since the last time I came. I hiked up to the Coit Tower hill and from there enjoyed the sunset over the Bay Area. No fog, so one could clearly see the Golden Gate and Alcatraz in the background. Going down the numerous stairs, I ended up in the Italian neighborhood and then went back to the hotel via Chinatown, near the Transamerica Pyramid. It’s good to be back in California.
Tomorrow the serious things will start: first, at dawn, the DevNation sessions in the morning, and then the Red Hat Summit opening keynotes in the afternoon.