
Senthil Rajendran

The voice in my head may not be real, but they have some good ideas!!!

Happy News for Readers - Packt Pub offers $10 Discount on Books on celebrating 10 glorious years

Mon, 2014-06-30 22:47
I would like to pass on the message below to readers.
Packt Publishing is celebrating 10 glorious years of publishing books. To celebrate this huge milestone, from June 26th Packt is offering all of its eBooks and videos at just $10 each for 10 days. This promotion covers every title, and customers can stock up on as many copies as they like until July 5th. Explore the offer here: http://bit.ly/1m1PPqj

Big Data Oracle NoSQL in No Time - It is time to Load Data for a Simple Use Case

Fri, 2014-04-18 22:05

Index
Big Data Oracle NoSQL in No Time - Getting Started Part 1
Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
Big Data Oracle NoSQL in No Time - Smoke Testing Part 6
Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7
Big Data Oracle NoSQL in No Time - It is time to Upgrade
Big Data Oracle NoSQL in No Time - It is time to Load Data for a Simple Use Case

There are a lot of NoSQL use-case references out there, but I wanted to keep this one simple. I am not a developer, but my Unix scripting skills come to the rescue.

So here is what I am planning to build:

  • a schema for storing server CPU details from the mpstat command
  • a job that stores a sample every minute
  • running on 4 nodes
  • and then some dashboards
AVRO Schema Design
Here I am creating an Avro schema that can hold the date and time along with the values from mpstat.
cpudata.avsc
{
  "type": "record",
  "name": "cpudata",
  "namespace": "avro",
  "fields": [
    {"name": "yyyy", "type": "int", "default": 0},
    {"name": "mm", "type": "int", "default": 0},
    {"name": "dd", "type": "int", "default": 0},
    {"name": "hh", "type": "int", "default": 0},
    {"name": "mi", "type": "int", "default": 0},
    {"name": "user", "type": "float", "default": 0},
    {"name": "nice", "type": "float", "default": 0},
    {"name": "sys", "type": "float", "default": 0},
    {"name": "iowait", "type": "float", "default": 0},
    {"name": "irq", "type": "float", "default": 0},
    {"name": "soft", "type": "float", "default": 0},
    {"name": "steal", "type": "float", "default": 0},
    {"name": "idle", "type": "float", "default": 0},
    {"name": "intr", "type": "float", "default": 0}
  ]
}
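Before running ddl add-schema it is worth checking that the .avsc file is well-formed JSON, since a JSON parser's error message is usually clearer than the store's. This is a small sketch (not from the original post); it uses python3 -m json.tool and a trimmed two-field copy of the schema so the snippet is self-contained:

```shell
# Sanity-check the Avro schema file before "ddl add-schema".
# A trimmed two-field copy of cpudata.avsc keeps the snippet self-contained;
# in practice, point json.tool at the real file.
cat > /tmp/cpudata.avsc <<'EOF'
{ "type": "record", "name": "cpudata", "namespace": "avro",
  "fields": [
    {"name": "yyyy", "type": "int",   "default": 0},
    {"name": "user", "type": "float", "default": 0}
  ]
}
EOF
# json.tool exits non-zero on malformed JSON (e.g. a missing comma).
python3 -m json.tool /tmp/cpudata.avsc > /dev/null && echo "schema OK"
```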
Now I am adding the schema to the store
$ java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000
kv-> ddl add-schema -file cpudata.avsc
Added schema: avro.cpudata.1
kv-> show schema
avro.cpudata  ID: 1  Modified: 2014-04-18 00:29:58 UTC, From: server1
kv->
To load the data I am creating a shell script that writes the put kv -key command into a temporary file and then immediately loads that file into the store. This is automated via a crontab entry that runs every minute, so the program captures the server CPU metrics minute by minute.
$ cat cpuload.sh
export KVHOME=$KVBASE/server2/oraclesoftware/kv-3.0.5
echo `hostname` `date +"%d-%m-%Y-%H-%M-%S"` `date +"%-d"` `date +"%-m"` `date +"%Y"` `date +"%-H"` `date +"%-M"` `mpstat|tail -1`|awk '{print "put kv -key /cpudata/"$1"/"$2" -value \"{\\\"yyyy\\\":"$5",\\\"mm\\\":"$4",\\\"dd\\\":"$3",\\\"hh\\\":"$6",\\\"mi\\\":"$7",\\\"user\\\":"$10",\\\"nice\\\":"$11",\\\"sys\\\":"$12",\\\"iowait\\\":"$13",\\\"irq\\\":"$14",\\\"soft\\\":"$15",\\\"steal\\\":"$16",\\\"idle\\\":"$17",\\\"intr\\\":"$18" }\" -json avro.cpudata"}' > /tmp/1.load
java -jar $KVHOME/lib/kvcli.jar -host server1 -port 5000 -store mystore load -file /tmp/1.load
$
$ crontab -l
* * * * * /oraclenosql/work/cpuload.sh
$
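To see what cpuload.sh actually writes into /tmp/1.load without a running store, here is a dry-run sketch of the same awk construction. The hostname, timestamp, and mpstat numbers in the canned input line are made up, and the value JSON is trimmed to three fields for readability:

```shell
# Dry run of the key/value construction from cpuload.sh.
# Input fields mirror the echo in the script: host, timestamp,
# dd mm yyyy hh mi, then the last mpstat line.
echo 'server1 18-04-2014-03-35-02 18 4 2014 3 35 03:35:02 all 0.88 1.35 0.39 1.04 0.00 0.01 0.04 96.30 713.04' |
awk '{print "put kv -key /cpudata/"$1"/"$2" -value \"{\\\"yyyy\\\":"$5",\\\"user\\\":"$10",\\\"idle\\\":"$17"}\" -json avro.cpudata"}'
```

The generated line uses the host as the major key component and the timestamp as the minor one, which is what makes the range queries later in this post possible.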
Now that the job is scheduled, I am checking whether records are getting loaded.
kv-> get kv -key /cpudata -all -keyonly
/cpudata/server1/18-04-2014-03-35-02
/cpudata/server1/18-04-2014-03-36-02
2 Keys returned.
kv->
Since the program has just started, the store has only two records for now.
kv-> aggregate -count -key /cpudata
count: 2
kv->
A detailed listing of the two records
kv-> get kv -key /cpudata/server1 -all
/cpudata/server1/18-04-2014-03-37-02
{
  "yyyy" : 2014,
  "mm" : 4,
  "dd" : 18,
  "hh" : 3,
  "mi" : 37,
  "user" : 0.8799999952316284,
  "nice" : 1.350000023841858,
  "sys" : 0.38999998569488525,
  "iowait" : 1.0399999618530273,
  "irq" : 0.0,
  "soft" : 0.009999999776482582,
  "steal" : 0.03999999910593033,
  "idle" : 96.30000305175781,
  "intr" : 713.0399780273438
}
/cpudata/server1/18-04-2014-03-35-02
{
  "yyyy" : 2014,
  "mm" : 4,
  "dd" : 18,
  "hh" : 3,
  "mi" : 35,
  "user" : 0.8799999952316284,
  "nice" : 1.350000023841858,
  "sys" : 0.38999998569488525,
  "iowait" : 1.0399999618530273,
  "irq" : 0.0,
  "soft" : 0.009999999776482582,
  "steal" : 0.03999999910593033,
  "idle" : 96.30000305175781,
  "intr" : 713.0399780273438
}
Now I am going to sleep; tomorrow the fun begins. With 24 hours completed, the store has the CPU metrics for the whole day. Let me try some aggregate commands.
Average CPU usage
kv-> aggregate -key /cpudata/server1 -avg user
avg(user): 0.8799999952316284
kv-> aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr
avg(user): 0.8799999952316284
avg(nice): 1.350000023841858
avg(sys): 0.38999998569488525
avg(iowait): 1.0399999618530273
avg(irq): 0.0
avg(soft): 0.009999999776482582
avg(steal): 0.03999999910593033
avg(idle): 96.30000305175781
avg(intr): 713.0599822998047
kv->
Let me apply a range and look at the hourly usage.
kv-> aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-04 -end 18-04-2014-05
avg(user): 0.8799999952316284
avg(nice): 1.350000023841858
avg(sys): 0.38999998569488525
avg(iowait): 1.0399999618530273
avg(irq): 0.0
avg(soft): 0.009999999776482582
avg(steal): 0.03999999910593033
avg(idle): 96.30000305175781
avg(intr): 713.0399780273438
kv-> aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-03-35-02 -end 18-04-2014-03-40-02
avg(user): 0.8799999952316284
avg(nice): 1.350000023841858
avg(sys): 0.38999998569488525
avg(iowait): 1.0399999618530273
avg(irq): 0.0
avg(soft): 0.009999999776482582
avg(steal): 0.03999999910593033
avg(idle): 96.30000305175781
avg(intr): 713.0849914550781
kv->

Interesting?
Time for some dashboards.

Hourly CPU Idle Metric 
$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> do
> echo "connect store -host server1 -port 5000 -name mystore" > /tmp/1.lst
> echo "aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-"$i" -end 18-04-2014-"$i >> /tmp/1.lst
> echo "18-04-2014-"$i" - "`java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000 load -file /tmp/1.lst|grep -i idle|awk '{print $2 }'`
> done
18-04-2014-01 - 96.27333068847656
18-04-2014-02 - 96.27999877929688
18-04-2014-03 - 96.30000305175781
18-04-2014-04 - 96.30000305175781
18-04-2014-05 - 96.30000305175781
18-04-2014-06 - 96.30000305175781
18-04-2014-07 - 96.28433303833008
18-04-2014-08 - 96.2699966430664
18-04-2014-09 - 96.2699966430664
18-04-2014-10 - 96.27333068847656
18-04-2014-11 - 96.27999877929688
18-04-2014-12 - 96.2870002746582
18-04-2014-13 - 96.29016761779785
18-04-2014-14 - 96.29683570861816
18-04-2014-15 - 96.302001953125
18-04-2014-16 - 96.30999755859375
18-04-2014-17 - 96.31849937438965
18-04-2014-18 - 96.32483406066895
18-04-2014-19 - 96.33000183105469
18-04-2014-20 - 96.3331667582194
18-04-2014-21 - 96.28135165652714
18-04-2014-22 - 96.27333068847656
18-04-2014-23 - 96.27999877929688
18-04-2014-24 - 96.27333068847656
$
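The hour list in these dashboard loops can be generated rather than typed out by hand. A small sketch (GNU seq -w zero-pads every number to the same width; on systems without it, printf '%02d' does the same job):

```shell
# Generate zero-padded hour labels 01..24 instead of listing them by hand.
for i in $(seq -w 1 24)
do
    echo "18-04-2014-$i"
done
```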



Hourly CPU User Metric 
$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> do
> echo "connect store -host server1 -port 5000 -name mystore" > /tmp/1.lst
> echo "aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-"$i" -end 18-04-2014-"$i >> /tmp/1.lst
> echo "18-04-2014-"$i" - "`java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000 load -file /tmp/1.lst|grep -i user|awk '{print $2 }'`
> done
18-04-2014-01 - 0.8899999856948853
18-04-2014-02 - 0.8899999856948853
18-04-2014-03 - 0.8799999952316284
18-04-2014-04 - 0.8799999952316284
18-04-2014-05 - 0.8799999952316284
18-04-2014-06 - 0.8799999952316284
18-04-2014-07 - 0.8819999933242798
18-04-2014-08 - 0.8906666517257691
18-04-2014-09 - 0.8899999856948853
18-04-2014-10 - 0.8899999856948853
18-04-2014-11 - 0.8899999856948853
18-04-2014-12 - 0.8899999856948853
18-04-2014-13 - 0.8899999856948853
18-04-2014-14 - 0.8899999856948853
18-04-2014-15 - 0.8899999856948853
18-04-2014-16 - 0.8899999856948853
18-04-2014-17 - 0.8899999856948853
18-04-2014-18 - 0.8899999856948853
18-04-2014-19 - 0.8899999856948853
18-04-2014-20 - 0.8899999856948853
18-04-2014-21 - 0.8921276432402591
18-04-2014-22 - 0.8799999952316284
18-04-2014-23 - 0.8799999952316284
18-04-2014-24 - 0.8899999856948853
$

Hourly CPU IOWAIT Metric
$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> do
> echo "connect store -host server1 -port 5000 -name mystore" > /tmp/1.lst
> echo "aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-"$i" -end 18-04-2014-"$i >> /tmp/1.lst
> echo "18-04-2014-"$i" - "`java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000 load -file /tmp/1.lst|grep -i iowait|awk '{print $2 }'`
> done
18-04-2014-01 - 1.0907692328477516
18-04-2014-02 - 1.0499999523162842
18-04-2014-03 - 1.0399999618530273
18-04-2014-04 - 1.0399999618530273
18-04-2014-05 - 1.0373332977294922
18-04-2014-06 - 1.0299999713897705
18-04-2014-07 - 1.0403332948684691
18-04-2014-08 - 1.0499999523162842
18-04-2014-09 - 1.0499999523162842
18-04-2014-10 - 1.0499999523162842
18-04-2014-11 - 1.0499999523162842
18-04-2014-12 - 1.0499999523162842
18-04-2014-13 - 1.0481666207313538
18-04-2014-14 - 1.0499999523162842
18-04-2014-15 - 1.0449999570846558
18-04-2014-16 - 1.0399999618530273
18-04-2014-17 - 1.0399999618530273
18-04-2014-18 - 1.0399999618530273
18-04-2014-19 - 1.0399999618530273
18-04-2014-20 - 1.0398332953453064
18-04-2014-21 - 1.0907692328477516
18-04-2014-22 - 1.0499999523162842
18-04-2014-23 - 1.0907692328477516
18-04-2014-24 - 1.0499999523162842
$


So this NoSQL use case is very simple. I have scheduled the job on a couple more servers, so the store can be used to analyze CPU metrics for all my hosted servers. The Avro schema can be extended to carry much more information.

Big Data Oracle NoSQL in No Time - It is time to Upgrade

Mon, 2014-04-14 03:54
Oracle NoSQL upgrade from 11gR2 to 12cR1 (2.0 to 3.0)


The upgrade is simple; NoSQL is brilliant in its simplicity.

Below are the steps:

  • verify prerequisite - verify that the storage nodes meet the prerequisites for the upgrade.
  • show upgrade-order - get the list of storage nodes in the order in which they can be upgraded.
  • replace the software - unzip the new software.
  • verify upgrade - verify that the storage nodes are upgraded to the version that was downloaded.
In our scenario we have a 4x4 deployment topology with one admin node, and we will upgrade from 11gR2 to 12cR1. First, let us upgrade the admin node.

$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server1/storage
$ cd $KVBASE/server1/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server1/storage &
$ nohup: appending output to `nohup.out'
$ java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #84
30 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:33:50 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn3
sn4
sn2
kv->
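Since the same stop/stage/start dance repeats for every storage node, it can be wrapped in a small helper. This is a hypothetical sketch (the upgrade_node function, the DRY_RUN flag, and the hard-coded version strings are mine, taken from the paths used in this post, not a product feature); with DRY_RUN=1 it only prints the commands, so the rollout can be reviewed against show upgrade-order first:

```shell
# Hypothetical per-node upgrade helper mirroring the manual steps:
# stop the old release, stage the new one, start it.
# With DRY_RUN=1 the commands are printed instead of executed.
upgrade_node() {
    node="$1"                                  # e.g. server3
    old="kv-2.0.39"
    new="kv-3.0.5"
    run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }
    run java -jar "$KVBASE/$node/oraclesoftware/$old/lib/kvstore.jar" stop -root "$KVBASE/$node/storage"
    run cp -Rf "$KVBASE/stage/$new" "$KVBASE/$node/oraclesoftware/"
    run java -jar "$KVBASE/$node/oraclesoftware/$new/lib/kvstore.jar" start -root "$KVBASE/$node/storage"
}

KVBASE=/oraclenosql/lab
DRY_RUN=1
upgrade_node server3
```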
In our case the upgrade order is determined to be sn3, sn4, and then sn2. We can re-check the upgrade order at each stage.
Now let us upgrade SN3
$ export KVHOME=$KVBASE/server3/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server3/storage
$
$ cd $KVBASE/server3/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server3/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server3/storage &
$
kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #84
30 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:40:31 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn4
sn2
kv->

Now let us upgrade SN4
$ export KVHOME=$KVBASE/server4/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server4/storage
$
$ cd $KVBASE/server4/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server4/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server4/storage &
$
kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #84
30 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:42:30 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn2
kv->
Now let us upgrade the last pending storage node SN2
$ export KVHOME=$KVBASE/server2/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server2/storage
$
$ cd $KVBASE/server2/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server2/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server2/storage &
$
kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #84
30 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:44:12 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
There are no nodes that need to be upgraded
kv->
Let us quickly verify the upgrade process
kv-> verify upgrade
Verify: starting verification of mystore based upon topology sequence #84
30 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:44:27 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify upgrade: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->

As an Oracle DBA I know the complexity of upgrades, but upgrading NoSQL is refreshingly different.

Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7

Fri, 2014-04-11 04:12


Let us expand our environment.
If your NoSQL store has a write bottleneck, then adding a storage node will help.
If your NoSQL store has a read bottleneck, then increasing the replication factor will help.

Steps to make 3x4 (to increase the write throughput)

kv-> plan deploy-sn -dc dc1 -port 5300 -wait -host server4
kv-> plan change-parameters -service sn4 -wait -params capacity=3
kv-> topology clone -current -name 3x4
kv-> topology change-repfactor -name 3x4 -pool AllStorageNodes -rf 4 -dc dc1
kv-> topology preview -name 3x4
kv-> plan deploy-topology -name 3x4 -wait



Steps to make 4x4 (to increase the read throughput)

kv-> plan change-parameters -service sn1 -wait -params capacity=4
kv-> plan change-parameters -service sn2 -wait -params capacity=4
kv-> plan change-parameters -service sn3 -wait -params capacity=4
kv-> plan change-parameters -service sn4 -wait -params capacity=4
kv-> topology clone -current -name 4x4
kv-> topology redistribute -name 4x4 -pool AllStorageNodes
kv-> topology preview -name 4x4
kv-> plan deploy-topology -name 4x4 -wait
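As a sanity check on these targets, the NxM name is just shards times replication factor: that product is the number of replication nodes, and it must fit within the summed capacity of the storage nodes (hence capacity=3 on four SNs for 3x4, and capacity=4 for 4x4). A quick arithmetic sketch:

```shell
# Replication nodes required = shards * replication factor.
rn_total() { echo $(( $1 * $2 )); }
echo "3x4: $(rn_total 3 4) RNs, hosted by 4 SNs of capacity 3 (= 12 slots)"
echo "4x4: $(rn_total 4 4) RNs, hosted by 4 SNs of capacity 4 (= 16 slots)"
```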



Big Data Oracle NoSQL in No Time - Smoke Testing Part 6

Fri, 2014-04-11 00:07


Oracle NoSQL can be smoke tested in different ways, but the most common ones are the ping command and a simple Java program.
Customers can design their own smoke-testing programs as needed.

Let us compile the example program from the documentation:
$ export KVBASE=/oraclenosql/lab
$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ cd $KVHOME
$ javac -cp lib/kvclient.jar:examples examples/hello/*.java
$ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
Hello Big Data World!
$

With all three storage nodes up and running, below is the output of the ping command and the Java program.

$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
Pinging components of store mystore based upon topology sequence #67
mystore comprises 30 partitions and 3 Storage Nodes
Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 255 haPort: 5011
        Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5013
        Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5012
Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg3-rn3]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5112
        Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5111
        Rep Node [rg2-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5110
Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg3-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5210
        Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5212
        Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5211
$
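A scripted version of this check is straightforward: capture the ping output and fail when any component reports UNREACHABLE. A sketch (the canned two-line sample below stands in for real output from java -jar $KVHOME/lib/kvstore.jar ping ..., so the snippet runs on its own):

```shell
# Fail a smoke test if ping reports any UNREACHABLE component.
# A canned sample replaces live ping output so the sketch is runnable.
cat > /tmp/ping.out <<'EOF'
Storage Node [sn1] on server1:5000    Status: RUNNING
Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1] UNREACHABLE
EOF
bad=$(grep -c UNREACHABLE /tmp/ping.out)
if [ "$bad" -gt 0 ]; then
    echo "smoke test FAILED: $bad unreachable component(s)"
else
    echo "smoke test passed"
fi
```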
$ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
Hello Big Data World!
$


Let us take down the third storage node. You will see ping confirming that the third storage node is unreachable, while the Java program still works fine with the remaining storage nodes.

$ export KVHOME=$KVBASE//server3/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE//server3/storage
$
$ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
Pinging components of store mystore based upon topology sequence #67
mystore comprises 30 partitions and 3 Storage Nodes
Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 255 haPort: 5011
        Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 137 haPort: 5013
        Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5012
Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 137 haPort: 5112
        Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5111
        Rep Node [rg2-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5110
Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1] UNREACHABLE
        Rep Node [rg3-rn1]      Status: UNREACHABLE
        Rep Node [rg2-rn3]      Status: UNREACHABLE
        Rep Node [rg1-rn3]      Status: UNREACHABLE
$
$ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
Hello Big Data World!
$

Let us take down the second storage node. With this, we are up and running with one storage node while two are down.
It is clear from the Java program that the NoSQL store is not functional, because the default commit policy is simple majority, which requires two replicas.
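Simple majority means a commit must be acknowledged by floor(rf/2) + 1 nodes, master included. With replication factor 3 that quorum is 2, which is why the store survives one storage node going down but not two. A sketch of the arithmetic:

```shell
# Simple-majority quorum: floor(rf / 2) + 1 (shell division truncates).
quorum() { echo $(( $1 / 2 + 1 )); }
echo "rf=3 -> quorum $(quorum 3)"   # one node may fail
echo "rf=5 -> quorum $(quorum 5)"   # two nodes may fail
```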

$ export KVHOME=$KVBASE//server2/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE//server2/storage
$
$ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
Pinging components of store mystore based upon topology sequence #67
mystore comprises 30 partitions and 3 Storage Nodes
Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 257 haPort: 5011
        Rep Node [rg3-rn2]      Status: RUNNING,UNKNOWN at sequence number: 137 haPort: 5013
        Rep Node [rg2-rn2]      Status: RUNNING,UNKNOWN at sequence number: 135 haPort: 5012
Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1] UNREACHABLE
        Rep Node [rg3-rn3]      Status: UNREACHABLE
        Rep Node [rg1-rn2]      Status: UNREACHABLE
        Rep Node [rg2-rn1]      Status: UNREACHABLE
Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1] UNREACHABLE
        Rep Node [rg3-rn1]      Status: UNREACHABLE
        Rep Node [rg2-rn3]      Status: UNREACHABLE
        Rep Node [rg1-rn3]      Status: UNREACHABLE
$
$ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
oracle.kv.DurabilityException: (JE 5.0.74) Commit policy: SIMPLE_MAJORITY required 2 replicas. But none were active with this master. (11.2.2.0.39)
Fault class name: com.sleepycat.je.rep.InsufficientReplicasException
Remote stack trace: com.sleepycat.je.rep.InsufficientReplicasException: (JE 5.0.74) Commit policy: SIMPLE_MAJORITY required 2 replicas. But none were active with this master.
$

By bringing up storage nodes 2 and 3, our store is operational again.

$ export KVHOME=$KVBASE//server3/oraclesoftware/kv-2.0.39
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE//server3/storage &
$ export KVHOME=$KVBASE//server2/oraclesoftware/kv-2.0.39
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE//server2/storage &

$ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
Pinging components of store mystore based upon topology sequence #67
mystore comprises 30 partitions and 3 Storage Nodes
Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg1-rn1]      Status: RUNNING,REPLICA at sequence number: 265 haPort: 5011
        Rep Node [rg3-rn2]      Status: RUNNING,MASTER at sequence number: 141 haPort: 5013
        Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5012
Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg3-rn3]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5112
        Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 265 haPort: 5111
        Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5110
Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5210
        Rep Node [rg2-rn3]      Status: RUNNING,MASTER at sequence number: 141 haPort: 5212
        Rep Node [rg1-rn3]      Status: RUNNING,MASTER at sequence number: 265 haPort: 5211
$

$ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
$ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
Hello Big Data World!
$

Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5

Thu, 2014-04-10 04:23

With the current 3x1 setup the NoSQL store is write-efficient. To make it read-efficient, the replication factor has to be increased, which internally creates more copies of the data to improve read performance.

In the scenario below we are going to increase the replication factor from 1 to 3 on the existing topology to make it read-friendly.


export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
kv-> show topology
store=mystore  numPartitions=30 sequence=60
  dc=[dc1] name=datacenter1 repFactor=1
  sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
    [rg2-rn1] RUNNING
          No performance info available
  sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING
    [rg3-rn1] RUNNING
          No performance info available
  shard=[rg1] num partitions=10
    [rg1-rn1] sn=sn1
  shard=[rg2] num partitions=10
    [rg2-rn1] sn=sn2
  shard=[rg3] num partitions=10
    [rg3-rn1] sn=sn3
kv-> plan change-parameters -service sn1 -wait -params capacity=3Executed plan 8, waiting for completion...Plan 8 ended successfullykv-> plan change-parameters -service sn2 -wait -params capacity=3Executed plan 9, waiting for completion...Plan 9 ended successfullykv-> plan change-parameters -service sn3 -wait -params capacity=3Executed plan 10, waiting for completion...Plan 10 ended successfullykv-> topology clone -current -name 3x3Created 3x3kv-> topology change-repfactor -name 3x3 -pool AllStorageNodes -rf 3 -dc dc1Changed replication factor in 3x3kv-> topology preview -name 3x3Topology transformation from current deployed topology to 3x3:Create 6 RNs
shard rg1  2 new RNs : rg1-rn2 rg1-rn3shard rg2  2 new RNs : rg2-rn2 rg2-rn3shard rg3  2 new RNs : rg3-rn2 rg3-rn3
kv-> plan deploy-topology -name 3x3 -waitExecuted plan 11, waiting for completion...Plan 11 ended successfullykv-> show topologystore=mystore  numPartitions=30 sequence=67  dc=[dc1] name=datacenter1 repFactor=3
  sn=[sn1]  dc=dc1 server1:5000 capacity=3 RUNNING    [rg1-rn1] RUNNING          No performance info available    [rg2-rn2] RUNNING          No performance info available    [rg3-rn2] RUNNING          No performance info available  sn=[sn2]  dc=dc1 server2:5100 capacity=3 RUNNING    [rg1-rn2] RUNNING          No performance info available    [rg2-rn1] RUNNING          No performance info available    [rg3-rn3] RUNNING          No performance info available  sn=[sn3]  dc=dc1 server3:5200 capacity=3 RUNNING    [rg1-rn3] RUNNING          No performance info available    [rg2-rn3] RUNNING          No performance info available    [rg3-rn1] RUNNING          No performance info available
  shard=[rg1] num partitions=10    [rg1-rn1] sn=sn1    [rg1-rn2] sn=sn2    [rg1-rn3] sn=sn3  shard=[rg2] num partitions=10    [rg2-rn1] sn=sn2    [rg2-rn2] sn=sn1    [rg2-rn3] sn=sn3  shard=[rg3] num partitions=10    [rg3-rn1] sn=sn3    [rg3-rn2] sn=sn1    [rg3-rn3] sn=sn2
kv->


So what have we done?

plan change-parameters -service sn1 -wait -params capacity=3
plan change-parameters -service sn2 -wait -params capacity=3
plan change-parameters -service sn3 -wait -params capacity=3
We are increasing the capacity of each storage node from 1 to 3 with the change-parameters command.

topology clone -current -name 3x3
We are cloning the current topology under the new name 3x3.

topology change-repfactor -name 3x3 -pool AllStorageNodes -rf 3 -dc dc1
We are using the change-repfactor command to modify the replication factor to 3. The replication factor cannot be changed again for this topology after this command executes.

You can use the show topology command to verify that the storage nodes are up and running. Alternatively, use the web interface to check the 3x3 distribution across the storage nodes.
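Beyond eyeballing the output, a quick shell check can confirm that every shard really ended up with three replication nodes. This is only a sketch, not part of the lab: it parses a saved copy of the show topology output (embedded below as sample text trimmed from the transcript above), and the rg/rn naming convention is assumed to match that output.

```shell
# Minimal sketch: count replication nodes per shard from a saved copy of
# "show topology" output. Each shard should show a count equal to the
# target replication factor (3). Sample text trimmed from the transcript.
topology_dump=$(cat <<'EOF'
  shard=[rg1] num partitions=10
    [rg1-rn1] sn=sn1
    [rg1-rn2] sn=sn2
    [rg1-rn3] sn=sn3
  shard=[rg2] num partitions=10
    [rg2-rn1] sn=sn2
    [rg2-rn2] sn=sn1
    [rg2-rn3] sn=sn3
  shard=[rg3] num partitions=10
    [rg3-rn1] sn=sn3
    [rg3-rn2] sn=sn1
    [rg3-rn3] sn=sn2
EOF
)

# One line per replication group with its RN count (expect 3 for each).
echo "$topology_dump" | grep -o 'rg[0-9]*-rn[0-9]*' | cut -d- -f1 | sort | uniq -c
```

In the lab you would feed the real output in (for example by redirecting `show topology` to a file) instead of the embedded sample.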

Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4

Thu, 2014-04-10 02:39
Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
Index
Big Data Oracle NoSQL in No Time - Getting Started Part 1
Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
Big Data Oracle NoSQL in No Time - Smoke Testing Part 6
Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7
Big Data Oracle NoSQL in No Time - It is time to Upgrade
Big Data Oracle NoSQL in No Time - It is time to Load Data for a Simple Use Case

Previously we set up a 1x1 topology and now we are going to move to a 3x1 topology.
By doing so we spread the data more widely across the NoSQL store. The main advantage is higher write throughput, which is achieved using the redistribute command. During the redistribution, partitions are spread across the new shards, and the end result is more replication nodes to serve your write operations.

In the below scenario we are going to add two replication nodes to the existing topology to make it write friendly.

$ export KVBASE=/oraclenosql/lab
$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
kv-> plan deploy-sn -dc dc1 -port 5100 -wait -host server2
Executed plan 5, waiting for completion...
Plan 5 ended successfully
kv-> plan deploy-sn -dc dc1 -port 5200 -wait -host server3
Executed plan 6, waiting for completion...
Plan 6 ended successfully
kv-> show topology
store=mystore  numPartitions=30 sequence=36
  dc=[dc1] name=datacenter1 repFactor=1

  sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
  sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING

  shard=[rg1] num partitions=30
    [rg1-rn1] sn=sn1

kv->
kv-> topology clone -current -name 3x1
Created 3x1
kv-> topology redistribute -name 3x1 -pool AllStorageNodes
Redistributed: 3x1
kv-> topology preview -name 3x1
Topology transformation from current deployed topology to 3x1:
Create 2 shards
Create 2 RNs
Migrate 20 partitions

shard rg2
  1 new RN : rg2-rn1
  10 partition migrations
shard rg3
  1 new RN : rg3-rn1
  10 partition migrations

kv-> plan deploy-topology -name 3x1 -wait
Executed plan 7, waiting for completion...
Plan 7 ended successfully
kv-> show topology
store=mystore  numPartitions=30 sequence=60
  dc=[dc1] name=datacenter1 repFactor=1

  sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
    [rg2-rn1] RUNNING
          No performance info available
  sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING
    [rg3-rn1] RUNNING
          No performance info available

  shard=[rg1] num partitions=10
    [rg1-rn1] sn=sn1
  shard=[rg2] num partitions=10
    [rg2-rn1] sn=sn2
  shard=[rg3] num partitions=10
    [rg3-rn1] sn=sn3

kv->



So what have we done?

plan deploy-sn -dc dc1 -port 5100 -wait -host server2
We are adding the second storage node into the datacenter dc1 which already has one storage node.

plan deploy-sn -dc dc1 -port 5200 -wait -host server3
We are adding one more storage node into the datacenter dc1 making it three storage nodes.

topology clone -current -name 3x1
We are cloning the existing 1x1 topology to a new candidate topology named 3x1. This candidate will be used for the change operations we plan to perform.

topology redistribute -name 3x1 -pool AllStorageNodes
We are redistributing the partitions onto the 3x1 topology.

topology preview -name 3x1
We can preview the topology before deploying it to the store.

plan deploy-topology -name 3x1 -wait
We are approving the deployment of the 3x1 plan; the deployment takes time to complete, depending on the store size.

You can use the show topology command to verify that the storage nodes are up and running. Alternatively, use the web interface to check the 3x1 distribution across the storage nodes.
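As a quick sanity check on the redistribution, a small shell sketch can total the partitions across shards from a saved copy of the show topology output. This is illustrative only: the sample text below is trimmed from the transcript above, and in practice you would capture the real command output instead.

```shell
# Minimal sketch: confirm all 30 partitions were redistributed evenly
# (10 per shard) by parsing saved "show topology" output. Sample text
# trimmed from the transcript above.
topo=$(cat <<'EOF'
  shard=[rg1] num partitions=10
    [rg1-rn1] sn=sn1
  shard=[rg2] num partitions=10
    [rg2-rn1] sn=sn2
  shard=[rg3] num partitions=10
    [rg3-rn1] sn=sn3
EOF
)

shards=$(echo "$topo" | grep -c 'shard=\[')
total=$(echo "$topo" | grep -o 'partitions=[0-9]*' | awk -F= '{s += $2} END {print s}')
echo "shards=$shards total_partitions=$total"   # prints: shards=3 total_partitions=30
```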