Tanel Poder

Recent content on Tanel Poder Consulting

My Performance & Troubleshooting scripts (TPT) for Oracle are now in GitHub and open sourced

Fri, 2018-05-18 15:13

I have uploaded my TPT-oracle scripts to GitHub and have formally open sourced them under the Apache 2.0 license as well. This allows companies to embed this software in their toolsets and processes & distribute them without worries from their legal departments.

The repository is here:

Now you can “git clone” this repository once and just “git pull” every now and then to see what updates & fixes I have made.

Also, if you like my scripts, make sure you “Star” this repository in GitHub too – the more stars it gets, the more updates I will commit! ;-)

While “git clone” is the recommended method for getting your own workstation copy of the repository, your servers might not have git installed (or direct internet access), so you can also download a zipfile of everything in this repo:
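
As a rough sketch – assuming the repository lives at github.com/tanelpoder/tpt-oracle with a master default branch (both hypothetical placeholders; use whatever the links above point to) – the workflow looks something like this:

$ git clone https://github.com/tanelpoder/tpt-oracle.git    # one-time workstation copy (repo path assumed)
$ cd tpt-oracle
$ git pull                                                   # later on, pull in whatever updates & fixes I have pushed

# on a server without git (or direct internet access), grab a zip snapshot instead:
$ curl -L -o tpt-oracle.zip https://github.com/tanelpoder/tpt-oracle/archive/master.zip
$ unzip tpt-oracle.zip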

You can also directly access individual scripts using links like the ones below. For example, if you want to run fish.sql to display an awesome SQL fish in sqlplus, you can download this:

Or, if you want to run something from a subdirectory, like ash/dashtop.sql for showing an ASH top report from the historical ASH data in DBA_HIST views, you can download the script from the ash subdirectory:
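
As another hedged sketch – the exact raw-file URL depends on the repository layout and branch, so treat the path below as a hypothetical example – fetching a single script straight onto a server and running it from sqlplus could look like this:

$ curl -L -O https://raw.githubusercontent.com/tanelpoder/tpt-oracle/master/ash/dashtop.sql   # raw URL assumed
$ sqlplus / as sysdba
SQL> @dashtop sql_opname,event2 username='SYS' DATE'2018-04-19' DATE'2018-04-20'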

Example output below:

SQL> @ash/dashtop sql_opname,event2 username='SYS' DATE'2018-04-19' DATE'2018-04-20'

    Total
  Seconds     AAS %This   SQL_OPNAME           EVENT2                                     FIRST_SEEN          LAST_SEEN
--------- ------- ------- -------------------- ------------------------------------------ ------------------- -------------------
     4930      .1   83%                        ON CPU                                     2018-04-19 18:00:04 2018-04-19 23:48:08
      430      .0    7%   SELECT               ON CPU                                     2018-04-19 18:01:04 2018-04-19 23:49:48
      290      .0    5%   SELECT               acknowledge over PGA limit                 2018-04-19 18:00:34 2018-04-19 23:23:50
       60      .0    1%   UPSERT               ON CPU                                     2018-04-19 18:00:04 2018-04-19 22:00:15
       50      .0    1%   UPSERT               acknowledge over PGA limit                 2018-04-19 18:00:04 2018-04-19 23:13:47
       30      .0    1%   CALL METHOD          ON CPU                                     2018-04-19 18:00:24 2018-04-19 21:03:19
       30      .0    1%                        control file sequential read               2018-04-19 18:56:42 2018-04-19 21:47:21
       30      .0    1%                        log file parallel write                    2018-04-19 21:03:19 2018-04-19 22:13:39
       20      .0    0%   CALL METHOD          acknowledge over PGA limit                 2018-04-19 18:00:24 2018-04-19 22:01:55
       20      .0    0%   DELETE               db file sequential read                    2018-04-19 20:46:54 2018-04-19 22:00:35
       20      .0    0%   SELECT               db file sequential read                    2018-04-19 22:01:05 2018-04-19 22:01:35
       10      .0    0%   INSERT               ON CPU                                     2018-04-19 19:50:28 2018-04-19 19:50:28
       10      .0    0%   INSERT               acknowledge over PGA limit                 2018-04-19 20:43:12 2018-04-19 20:43:12
       10      .0    0%   SELECT               db file scattered read                     2018-04-19 23:03:55 2018-04-19 23:03:55
       10      .0    0%                        LGWR any worker group                      2018-04-19 21:03:19 2018-04-19 21:03:19
       10      .0    0%                        control file parallel write                2018-04-19 21:05:59 2018-04-19 21:05:59

16 rows selected.
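
For reference, the argument pattern in the dashtop call above is: the ASH columns to group by (sql_opname,event2), a filter predicate (username='SYS') and the start & end of the analysis time window. As a sketch rather than documented syntax – assuming the script accepts any valid date expression for the time range – a variation grouping by SQL_ID over the last day might look like this:

SQL> @ash/dashtop sql_id,event2 username='SYS' sysdate-1 sysdate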

Now that I have this stuff in GitHub, I plan to update my scripts a bit more regularly – and you can follow the repository to get real-time updates whenever I push something new.

As a next step I’ll convert my blog from WordPress to static hosting (Hugo) hopefully over this weekend, so you might see a few blog template/webserver glitches in the next few days.

Video: Oracle X$TRACE, Wait Event Internals and Background Process Communication

Wed, 2018-01-24 21:33

I have uploaded the video of my Secret Hacking Session: Oracle X$TRACE, Wait Event Internals and Background Process Communication to my Oracle performance & troubleshooting YouTube channel.

The slides are on SlideShare.

Enjoy!

 

NB! I am running one more Advanced Oracle Troubleshooting training in 2018! You can attend the live online training and download personal video recordings too. Part 1 starts on 29 January 2018 - sign up here!

Secret Hacking Session: Oracle Background Process Communication, Exotic Wait Events and Some Tracing too

Thu, 2018-01-11 14:34

Since I’m running my Advanced Oracle Troubleshooting training at the end of this month, I’ll do one of my “secret” hacking sessions next week too, for promotion and noise-making reasons! ;-)

Secret Hacking Session with Tanel Poder: Oracle Background Process Communication, Exotic Wait Events and Some Tracing too

In this session we will look into some internals of Oracle background process communication and also some special types of wait events that most people aren’t aware of. We will use some exotic tracing for internals research and fun – and some of this stuff is actually useful in real life too! I’m not going to reveal everything upfront, as this is a secret internals hacking session after all ;-)

We will use various techniques to research what the “reliable message” wait event is about and how reliable background process communication is orchestrated in Oracle.

This is a hacking session, not formal structured training, so I’ll just do free-form demos and talk (probably no slides, just hacking stuff on the command line). I will later upload the video to my YouTube channel too – https://youtube.com/TanelPoder

Oh and it’s free!

Date & Time: Wed 17 Jan 10am PST

Location: GotoWebinar

See you soon!

(I said there would probably be no slides, but maybe I’ll still show one or two ;-)


NB! I am running one more Advanced Oracle Troubleshooting training in 2018! You can attend the live online training and download personal video recordings too. Part 1 starts on 29 January 2018 - sign up here!

Advanced Oracle Troubleshooting seminar in 2018!

Wed, 2017-11-29 16:24

A lot of people have asked me to do another run of my Advanced Oracle Troubleshooting training or at least get access to previous recordings – so I decided to geek out over the holiday period, update the material with the latest stuff and run one more AOT class in 2018!

The online training will take place on 29 January – 2 February 2018 (Part 1) & 26 February – 2 March 2018 (Part 2).

The latest TOC is below:

Seminar registration details:

Just like last time (AOT 2.5, about 2 years ago!), the attendees will get downloadable video recordings after the sessions for personal use! So, no crappy streaming with a 14-day expiry date – you can download the video MP4 files straight to your computer or tablet and keep them for your own use forever!

If you sign up early and can’t wait until end of January, I can send the registered attendees most of the previous AOT 2.5 video recordings upfront, so you’d be ready for action in the live class :)

I also have a YouTube channel (that you may have missed) – a couple of introductory videos about how I set up my environment & use some key scripts are available there now:

I plan to start posting some more Oracle/Linux/Hadoop stuff in the Youtube channel, but this is quite likely the last AOT class that I do, so see you soon! ;-)

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Hadoop for Database Professionals class at NoCOUG Fall Conference on 9th Nov

Fri, 2017-10-27 12:33

If you happen to be in the Bay Area on Thursday 9 November, come check out the NoCOUG Fall Conference at California State University in downtown Oakland, CA.

Gluent is delivering a Hadoop for Database Professionals class as a separate track there (with myself and Michael Rainey as speakers) where we’ll explain the basics & concepts of modern distributed data processing platforms and then show a bunch of Hadoop demos too (mostly SQL-on-Hadoop stuff that database folks care about).

This should be a useful class to attend if you are wondering what all the buzz about Hadoop and various distributed “Big Data” computing technologies is about – and where these technologies work well (and where they don’t) in a traditional enterprise. All explained using database professionals’ terminology. And there’s a networking event at the end!

You can check out the event agenda here and can RSVP at http://nocoug.org/rsvp.html. If you aren’t already a NoCOUG member, you can still attend for free as a Gluent Guest using the secret code…. “GLUENT” :-)

See you soon!

NoCOUG Conference – Hadoop Workshop


NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Hadoop for Database Professionals – St. Louis (7. Sep)

Mon, 2017-08-28 12:07

Here’s some more free stuff by Gluent!

We are running another half-day course together with Cloudera, this time in St. Louis on 7 September 2017.

We will use our database background and explain, using database professionals’ terminology, why “new world” technologies like Hadoop will take over some parts of enterprise IT, why those platforms are so much better for advanced analytics over big datasets, and how to use the right tool from the Hadoop ecosystem for solving the right problem.

More information below. See you there!

Hadoop for Database Professionals – St. Louis

Also, Michael Rainey will deliver a SQL-on-Hadoop overview session in Portland, OR on 6 September 2017

NWOUG Portland Training Day 2017


NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Apache Impala Internals Deep Dive with Tanel Poder + Gluent New World Training Month

Tue, 2017-07-11 11:51

We are running a “Gluent New World training month” this July and have scheduled three webinars on the following Wednesdays!

The first webinar with Michael Rainey is going to cover modern alternatives to the traditional old-school “ETL on an RDBMS” approach for data integration and sharing. Then, the next Wednesday, I will demonstrate some of the Apache Impala SQL engine’s internals, with commentary from an Oracle database geek’s angle (I plan to get pretty deep & technical). And at the end of the month, Gluent customer Vistra Energy will talk about their journey towards a modern analytics platform.

All together this should give a good overview of architectural opportunities that modern enterprise data platforms provide, with some technical Apache Impala hacking thrill too!

Offload, Transform & Present – The New World of Data Integration

Apache Impala Internals with Tanel Poder

  • Speaker: Tanel Poder, Gluent
  • Wednesday, July 19 @ 12 PM CDT

Building an Analytics Platform with Oracle & Hadoop

  • Speakers: Gerry Moore & Suresh Irukulapati, Vistra Energy
  • Wednesday, July 26 @ 9 AM CDT

You can see the abstracts and register for the webinars here.

We plan to run more technical sessions about different modern platform components and more customer case studies in the future too. See you soon!

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

I’m speaking at Advanced Spark Meetup & attending Deep Learning Workshop in San Francisco

Wed, 2017-01-18 15:50

In case you are interested in the “New World” and happen to be in the Bay Area this week (19 & 21 Jan 2017), there are two interesting events that you might want to attend (I’ll speak at one and attend the other):

Advanced Spark and TensorFlow Meetup

I’m speaking at the advanced Apache Spark meetup and showing different ways of profiling applications, with the main focus on CPU efficiency. This is a free Meetup in San Francisco, hosted at AdRoll.

Putting Deep Learning into Production Workshop

This 1-day workshop is about the practical aspects of putting deep learning models into production use in enterprises. It’s a very interesting topic for me as enterprise-grade production-ready machine learning requires much more than just developing a model (just like putting any software in production requires much more than just writing it). “Boring” things like reliability, performance, making input data available for the engine – and presenting the results to the rest of the enterprise come to mind first (the last parts are where Gluent operates :)

Anyway, the speaker list is impressive and I signed up! I told the organizers that I’d promote the event and they even offered a 25% discount code (use GLUENT as the discount code ;-)

This will be fun!

Putting Deep Learning into Production

Saturday, Jan 21, 2017, 9:30 AM

Capital One
201 3rd St, 5th Floor San Francisco, CA

20 Spark and TensorFlow Experts Attending

RSVP: https://conf.startup.ml/ (registration: https://conf.startup.ml/options/reg – 15% off discount code before New Year’s Eve: FREGLY)

Deep learning models are achieving state-of-the-art results in speech, image/video classification and numerous other areas, but …



NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

GNW05 – Extending Databases with Hadoop video (plus GNW06 dates)

Tue, 2016-12-27 18:02

In case you missed this webinar, here’s a 1.5h holiday video about how Gluent “turbocharges” your databases with the power of Hadoop – all this without rewriting your applications :-)

Also, you can already sign up for the next webinar here:

  • GNW06 – Modernizing Enterprise Data Architecture with Gluent, Cloud and Hadoop
  • January 17 @ 12:00pm-1:00pm CST
  • Register here.

See you soon!

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

GNW05 – Extending Databases With the Full Power of Hadoop: How Gluent Does It

Tue, 2016-12-13 14:15

It’s time to announce the next webinar in the Gluent New World series. This time I will deliver it myself (and let’s have some fun :-)

Details below:

GNW05 – Extending Databases With the Full Power of Hadoop: How Gluent Does It

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Gluent Podcast with Mark Rittman

Tue, 2016-12-06 07:11

Mark Rittman has been publishing his podcast series (Drill to Detail) for a while now and I sat down with him at the UKOUG Tech 2016 conference to discuss Gluent and its place in the new world.

This podcast episode is about 49 minutes long and explains why I decided to build Gluent a couple of years ago and where I see the enterprise data world going in the future.

It’s worth listening to if you are interested in what we are up to at Gluent – and in hearing Mark’s excellent comments about what he sees going on in the modern enterprise world too!

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Dallas Oracle User Group Performance & 12.2 New Features Technical Day

Fri, 2016-10-14 13:29

Just letting people in the DFW area know that I’m speaking at the DOUG Performance & Tuning and 12.2 New Features Technical Day!

Time:

  • Thursday 20 October 2016 9:30am-5:30pm

Location: 

  • Courtyard & TownePlace Suites DFW Airport North/Grapevine, TX
    2200 Bass Pro Court, Grapevine, TX 76051

Speakers (Seven Oracle ACE Directors!):

  • Jim Czuprynski

  • Charles Kim

  • Cary Millsap

  • Dan Morgan

  • Kerry Osborne

  • Tanel Poder

  • Nitin Vengurlekar

Topics:

  • I’ll speak about In-Memory Processing for Databases, where I plan to go pretty deep into the fundamentals of columnar data structures, CPU- & cache-efficient execution, and how Oracle’s In-Memory column store does this.
  • There will be plenty of Oracle performance talks and also Oracle Database 12.2 topics.

Sign up & more details:

There will also be free beer at the end! ;-)

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Gluent New World #04: Next Generation Oracle Database Architectures using Super-Fast Storage with James Morle

Mon, 2016-06-13 11:30

It’s time to announce the 4th episode of the Gluent New World webinar series, by James Morle! James is a database/storage visionary and has been actively contributing to the Oracle database scene for over 20 years – including with his unique book Scaling Oracle8i, which gave a full-stack overview of how the different layers of your database platform work and perform together.

The topic for this webinar is:

When the Rules Change: Next Generation Oracle Database Architectures using Super-Fast Storage

Speaker:

  • James Morle has been working in the high-performance database market for 25 years, most of which he has spent working with the Oracle database. After 15 years running Scale Abilities in the UK, he now leads Oracle Solutions at DSSD/EMC in Menlo Park.

Time:

  • Tue, Jun 21, 2016 12:00 PM – 1:15 PM CDT

Abstract:

  • When enabled with revolutionary storage performance capabilities, it becomes possible to think differently about physical database architecture: massive consolidation, simplified data architectures, more data agility and reduced management overhead. This presentation, based on the DSSD D5 platform, includes a performance and cost comparison with other platforms and shows how extreme performance is not only for extreme workloads.

Register here:

This should be fun! As usual, I’ll be asking some questions myself and the audience can ask questions too. See you soon!

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Gluent New World #03: Real Time Stream Processing in Modern Enterprises with Gwen Shapira

Mon, 2016-05-16 12:01

Update: Added links to video recording and slides below.

It’s time to announce the 3rd episode of the Gluent New World webinar series! This time Gwen Shapira will talk about Kafka as a key data infrastructure component of a modern enterprise. And I will ask questions from an old database guy’s viewpoint :)

Apache Kafka and Real Time Stream Processing

Video recording & slides:

Speaker:

  • Gwen Shapira (Confluent)
  • Gwen is a system architect at Confluent helping customers achieve success with their Apache Kafka implementation. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. She currently specializes in building real-time reliable data processing pipelines using Apache Kafka. Gwen is an Oracle Ace Director, an author of “Hadoop Application Architectures”, and a frequent presenter at industry conferences. Gwen is also a committer on the Apache Kafka and Apache Sqoop projects. When Gwen isn’t coding or building data pipelines, you can find her pedaling on her bike exploring the roads and trails of California, and beyond.

Time:

Abstract:

  • Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of continuously changing data in real time? The answer is stream processing, and one system that has become a core hub for streaming data is Apache Kafka. This presentation will give a brief introduction to Apache Kafka and describe its usage as a platform for streaming data. It will explain how Kafka serves as a foundation for both streaming data pipelines and applications that consume and process real-time data streams. It will introduce some of the newer components of Kafka that help make this possible, including Kafka Connect, a framework for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library. Finally, it will describe some of our favorite use cases of stream processing and how they solved interesting data scalability challenges.

Register:

See you soon!

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Gluent New World #02: SQL-on-Hadoop with Mark Rittman

Thu, 2016-04-07 10:02

Update: The video recording of this session is here:

Slides are here.

Other videos are available at Gluent video collection.

It’s time to announce the 2nd episode of the Gluent New World webinar series!

The Gluent New World webinar series is about modern data management: architectural trends in enterprise IT and technical fundamentals behind them.

GNW02: SQL-on-Hadoop : A bit of History, Current State-of-the-Art, and Looking towards the Future

Speaker:

  • This GNW episode is presented by none other than Mark Rittman, the co-founder & CTO of Rittman Mead and an all-around guru of enterprise BI!

Time:

  • Tue, Apr 19, 2016 12:00 PM – 1:15 PM CDT

Abstract:

Hadoop and NoSQL platforms initially focused on Java developers and slow but massively-scalable MapReduce jobs as an alternative to high-end but limited-scale analytics RDBMS engines. Apache Hive opened up Hadoop to non-programmers by adding a SQL query engine and relational-style metadata layered over raw HDFS storage, and since then open-source initiatives such as Hive Stinger, Cloudera Impala and Apache Drill along with proprietary solutions from closed-source vendors have extended SQL-on-Hadoop’s capabilities into areas such as low-latency ad-hoc queries, ACID-compliant transactions and schema-less data discovery – at massive scale and with compelling economics.

In this session we’ll focus on the technical foundations of SQL-on-Hadoop, first reviewing the basic platform Apache Hive provides and then looking in more detail at how ad-hoc querying, ACID-compliant transactions and data discovery engines work, along with the more specialised underlying storage that each now works best with – and we’ll take a look at the future to see how SQL querying, data integration and analytics are likely to come together in the next five years to make Hadoop the default platform for running mixed old-world/new-world analytics workloads.

Register:

 

If you missed the last GNW01: In-Memory Processing for Databases session, here are the video recordings and slides!

See you soon!


NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Gluent Demo Video Launch

Wed, 2016-03-30 13:58

Although we are still in stealth mode (kind of), due to overwhelming requests for information, we decided to publish a video about what we do :)

It’s a short 5-minute video, just click on the image below or go straight to http://gluent.com:

Gluent Demo video

And this, by the way, is just the beginning.

Gluent is getting close to 20 people now, with distributed teams in the US and UK – and we are still hiring!


NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

GNW01: In-Memory Processing for Databases

Mon, 2016-03-28 00:39

Hi, it took a bit longer than I had planned, but here’s the first Gluent New World webinar recording!

You can also subscribe to our new Vimeo channel here – I will announce the next event with another great speaker soon ;-)

A few comments:

  • Slides are here
  • I’ll figure out a good way to deal with offline follow-up Q&A later on, after we’ve done a few of these events

If you like this stuff, please share it too – let’s make this series totally awesome!


NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Gluent New World: In-Memory Processing for Databases

Mon, 2016-03-14 14:52

As Gluent is all about gluing together the old world and new world in enterprises, it’s time to announce the Gluent New World webinar series!

The Gluent New World sessions cover the important technical details behind new advancements in enterprise technologies that are arriving into mainstream use.

These seminars help you stay current with the major technology changes that are inevitably arriving at your company soon (if not already). You can make informed decisions about what to learn next – so that you are still relevant in your profession five years from now.

Think about software-defined storage, open data formats, cloud processing, in-memory computation, direct attached storage, all-flash and distributed stream processing – and this is just a start!

The speakers of this series are technical experts in their field – able to explain in detail how the technology works internally, which fundamental changes in the technology world have enabled these advancements and why it matters to all of you (not just the Googles and Facebooks out there).

I picked myself as the speaker for the first event in this series:

Gluent New World: In-Memory Processing for Databases

In this session, Tanel Poder will explain how the new CPU cache-efficient data processing methods help to radically speed up data processing workloads – on data stored in RAM and also read from disk! This is a technical session about internal CPU efficiency and cache-friendly data structures, using Oracle Database and Apache Arrow as examples.

Time:

  • Tue, Mar 22, 2016 1:00 PM – 2:00 PM CDT

Register:

After registering, you will receive a confirmation email containing information about joining the webinar.

See you soon!

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

My BIWA Summit Presentations

Tue, 2016-01-26 17:01

Here are the two BIWA Summit 2016 presentations I delivered today. The first one is a collection of high-level thoughts (and opinions) of mine and the second one is more technical:

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Public Appearances H1 2016

Sat, 2016-01-09 21:53

Here’s where I’ll hang out in the following months:

26-28 January 2016: BIWA Summit 2016 in Redwood Shores, CA

10-11 February 2016: RMOUG Training Days in Denver, CO

25 February 2016: Yorkshire Database (YoDB) in Leeds, UK

6-10 March 2016: Hotsos Symposium, Dallas, TX

10-14 April 2016: IOUG Collaborate, Las Vegas, NV

  • Beer session: Not speaking myself, but planning to hang out for the first couple of conference days, drink beer and attend Gluent colleague Maxym Kharchenko’s presentations

24-26 April 2016: Enkitec E4, Barcelona, Spain

18-19 May 2016: Great Lakes Oracle Conference (GLOC) in Cleveland, OH

  • I plan to submit abstracts (and hope to get some accepted :)
  • The abstract submission is still open until 1st February 2016

2-3 June 2016: AMIS 25 – Beyond the Horizon near Leiden, Netherlands

  • This AMIS 25th anniversary event will take place in a pretty cool location – an old military airport hangar (and abstract submission is still open :)
  • Update: I unfortunately had to cancel my speaking plans at the AMIS event 

5-7 June 2016: Enkitec E4, Dallas, TX

 

As you can see, I have changed my “I don’t want to travel anymore” policy ;-)

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)
