
Pas Apicella

Information on the Pivotal product family - GemFire, SQLFire/GemFireXD, tc Server, RabbitMQ, Pivotal HD, Greenplum DB, Cloud Foundry and Spring.

Using the tc Server build pack for Pivotal Cloud Foundry 1.3

Wed, 2014-10-22 19:09
On Pivotal Network you will find various buildpacks, beyond those shipped with the product, that you can download, install into PCF, and use for your applications. They are available at the link below.

https://network.pivotal.io/products/pivotal-cf

I am going to show how you would take one of these buildpacks, install it, and then consume it from an application. In this demo I am going to use "tc server buildpack (offline) v2.4".

1. Log in as an admin user and upload the buildpack as shown below. I am adding this buildpack in the last position, which is position 6.

[Tue Oct 21 20:36:01 papicella@:~/cf/buildpacks ] $ cf create-buildpack tc_server_buildpack_offline tc-server-buildpack-offline-v2.4.zip 6
Creating buildpack tc_server_buildpack_offline...
OK

Uploading buildpack tc_server_buildpack_offline...
OK

2. View buildpacks, which should show the one we just uploaded above.

[Thu Oct 23 11:15:18 papicella@:~/cf/APJ1 ] $ cf buildpacks
Getting buildpacks...

buildpack                     position   enabled   locked   filename
java_buildpack_offline        1          true      false    java-buildpack-offline-v2.4.zip
ruby_buildpack                2          true      false    ruby_buildpack-offline-v1.1.0.zip
nodejs_buildpack              3          true      false    nodejs_buildpack-offline-v1.0.1.zip
python_buildpack              4          true      false    python_buildpack-offline-v1.0.1.zip
go_buildpack                  4          true      false    go_buildpack-offline-v1.0.1.zip
php_buildpack                 5          true      false    php_buildpack-offline-v1.0.1.zip
tc_server_buildpack_offline   6          true      false    tc-server-buildpack-offline-v2.4.zip

3. Push an application using the buildpack uploaded above. Below is a simple manifest which refers to it.

manifest.yml

applications:
- name: pcfhawq
  memory: 512M
  instances: 1
  host: pcfhawq
  domain: yyyy.fe.dddd.com
  path: ./pcfhawq.war
  buildpack: tc_server_buildpack_offline
  services:
   - phd-dev

[Thu Oct 23 11:36:26 papicella@:~/cf/buildpacks ] $ cf push -f manifest-apj1.yml
Using manifest file manifest-apj1.yml

Creating app pcfhawq-web in org pas-org / space apple as pas...
OK

Creating route yyyy.apj1.dddd.gopivotal.com...
OK

Binding pcfhawq-web.yyyy.fe.dddd.com to pcfhawq-web...
OK

Uploading pcfhawq-web...
Uploading app files from: pcfhawq.war
Uploading 644.1K, 181 files
OK
Binding service phd-dev to app pcfhawq-web in org pas-org / space apple as pas...
OK

Starting app pcfhawq-web in org pas-org / space apple as pas...
OK
-----> Downloaded app package (5.6M)
-----> Java Buildpack Version: v2.4 (offline) | https://github.com/pivotal-cf/tc-server-buildpack.git#396ad0a
-----> Downloading Open Jdk JRE 1.7.0_60 from http://download.run.pivotal.io/openjdk/lucid/x86_64/openjdk-1.7.0_60.tar.gz (found in cache)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.3s)
-----> Downloading Spring Auto Reconfiguration 1.4.0_RELEASE from http://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.4.0_RELEASE.jar (found in cache)
       Modifying /WEB-INF/web.xml for Auto Reconfiguration
-----> Downloading Tc Server Instance 2.9.6_RELEASE from http://download.run.pivotal.io/tc-server/tc-server-2.9.6_RELEASE.tar.gz (found in cache)
       Instantiating tc Server in .java-buildpack/tc_server (3.4s)
-----> Downloading Tc Server Lifecycle Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-lifecycle-support/tomcat-lifecycle-support-2.2.0_RELEASE.jar (found in cache)
-----> Downloading Tc Server Logging Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-logging-support/tomcat-logging-support-2.2.0_RELEASE.jar (found in cache)
-----> Downloading Tc Server Access Logging Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-access-logging-support/tomcat-access-logging-support-2.2.0_RELEASE.jar (found in cache)
-----> Uploading droplet (45M)

1 of 1 instances running

App started

Showing health and status for app pcfhawq-web in org pas-org / space apple as pas...
OK

requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: pcfhawq-web.yyyy.fe.dddd.com

     state     since                    cpu    memory         disk
#0   running   2014-10-23 11:37:56 AM   0.0%   398.6M of 1G   109.2M of 1G

4. Verify within the Developer Console that the application is using the buildpack you targeted.


More Information

Buildpacks
http://docs.pivotal.io/pivotalcf/buildpacks/index.html

Connecting to Pivotal Cloud Foundry Ops Metrics using Java VisualVM

Sun, 2014-10-19 21:34
The Pivotal Ops Metrics tool is a JMX extension for Elastic Runtime. Pivotal Ops Metrics collects and exposes system data from Cloud Foundry components via a JMX endpoint. Use this system data to monitor your installation and assist in troubleshooting. Below is the tile once installed and available within Pivotal Cloud Foundry Ops Manager.


Once installed and configured, metrics for Cloud Foundry components automatically report to the JMX endpoint. Your JMX client uses the credentials supplied to connect to the IP address of the Pivotal Ops Metrics JMX Provider at port 44444.

1. Start jvisualvm

2. Under Plugins, ensure you have the VisualVM-MBeans plugin installed as shown below, or install it to be able to view the MBeans.



3. Create a JMX connection as shown below


4. Finally the CF MBeans can be viewed as shown below.
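As an aside, you can make the same connection programmatically with the standard JMX API rather than through the VisualVM UI. Below is a minimal sketch; the IP address and credentials are placeholders for your own Ops Metrics provider.

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import java.util.HashMap;
import java.util.Map;

public class OpsMetricsConnect
{
    public static void main(String[] args) throws Exception
    {
        // Ops Metrics exposes a standard JMX/RMI endpoint on port 44444
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://x.x.x.x:44444/jmxrmi");

        // Credentials as configured for the Ops Metrics tile
        Map<String, Object> env = new HashMap<String, Object>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "password" });

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        MBeanServerConnection connection = connector.getMBeanServerConnection();

        // A quick sanity check that the CF MBeans are visible
        System.out.println("MBean count: " + connection.getMBeanCount());

        connector.close();
    }
}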



More Information

Deploying Pivotal Ops Metrics
http://docs.pivotal.io/pivotalcf/customizing/deploy-metrics.html

SQLShell accessing Pivotal GemFire XD 1.3

Sun, 2014-10-19 19:09
I stumbled upon SQLShell recently via the URL below. Here I will show how you can connect to Pivotal GemFireXD using SQLShell. I used it to export query results as CSV output.

http://software.clapper.org/sqlshell/users-guide.html

Note: This assumes SQLShell is already installed; the instructions below are for Mac OS X.

1. Create a file at $HOME/.sqlshell/config as shown below; I just took the sample it ships with. Notice how I have added an alias for "gemfirexd" in the [drivers] section.

# ---------------------------------------------------------------------------
# initialization file for SQLShell

[settings]
#colspacing: 2

[drivers]
# Driver aliases.
postgresql = org.postgresql.Driver
postgres = org.postgresql.Driver
mysql = com.mysql.jdbc.Driver
sqlite = org.sqlite.JDBC
sqlite3 = org.sqlite.JDBC
oracle = oracle.jdbc.driver.OracleDriver
access = sun.jdbc.odbc.JdbcOdbcDriver
gemfirexd = com.pivotal.gemfirexd.jdbc.ClientDriver

[vars]
historyDir: ${env.HOME}/.sqlshell

[db_postgres]
aliases: post
url: jdbc:postgresql://localhost:5432/sampledb
driver: postgres
user: ${system.user.name}
password:
history: $vars.historyDir/postgres.hist

[db_mysql]
#aliases:
driver: mysql
url: jdbc:mysql://localhost:3306/sampledb
user: ${system.user.name}
password:
history: $vars.historyDir/mysql.hist

[db_sqlite3]
aliases: sqlite3
url: jdbc:sqlite:/tmp/sample.db
driver: sqlite
history: $vars.historyDir/sqlite3.hist

[db_oracle]
aliases: ora
schema: example
url: jdbc:oracle:thin:@localhost:1521:sampledb
user: ${system.user.name}
password:
driver: oracle
history: $vars.historyDir/scrgskd

[db_access]
driver: access
url: jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=/tmp/sample.mdb;DriverID=22

2. Add the Pivotal GemFireXD client driver "gemfirexd-client.jar" to "/Applications/sqlshell/lib".

3. With a Pivotal GemFireXD cluster up and running, connect and run some commands as shown below.
  
[Mon Oct 20 11:56:10 papicella@:~/vmware/software/sqlshell ] $ sqlshell gemfirexd,jdbc:gemfirexd://localhost:1527
SQLShell, version 0.8.1 (2012/03/16 09:43:31)
Copyright (c) 2009-2011 Brian M. Clapper
Using JLine
Type "help" for help. Type ".about" for more information.

sqlshell> .set schema APP

sqlshell> .show tables
ALL_EMPS APPLES_OFFHEAP CUSTOMERS
DEPT EMP EMPLOYEES
EMPS_IN_DEPT_10 EMPS_IN_DEPT_20 EMPS_IN_DEPT_30
EMPS_IN_DEPT_40 OFFICES ORDERDETAILS
ORDERS PAYMENTS PERSON
PRODUCTLINES PRODUCTS TEST_ASYNC
TEST_ASYNC2 TEST_CALLBACKLISTENER
sqlshell> select * from dept;
Execution time: 0.21 seconds
Retrieval time: 0.6 seconds
7 rows returned.

DEPTNO DNAME      LOC
------ ---------- --------
10     ACCOUNTING NEW YORK
20     RESEARCH   DALLAS
30     SALES      CHICAGO
40     OPERATIONS BRISBANE
50     MARKETING  ADELAIDE
60     DEV        PERTH
70     SUPPORT    SYDNEY

sqlshell> .capture to /tmp/results.csv
Capturing result sets to: /tmp/results.csv

sqlshell> select * from emp where deptno = 10;
Execution time: 0.18 seconds
Retrieval time: 0.5 seconds
3 rows returned.

EMPNO ENAME  JOB       MGR  HIREDATE              SAL  COMM DEPTNO
----- ------ --------- ---- --------------------- ---- ---- ------
7782  CLARK  MANAGER   7839 1981/06/09 00:00:00.0 2450 NULL 10
7839  KING   PRESIDENT NULL 1981/11/17 00:00:00.0 5000 NULL 10
7934  MILLER CLERK     7782 1982/01/23 00:00:00.0 1300 NULL 10

sqlshell> .capture off
No longer capturing query results.

sqlshell>
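SQLShell is simply driving the GemFireXD JDBC client driver defined in the config above. For comparison, here is a minimal plain-JDBC sketch against the same cluster; the URL matches the session above and the query assumes the APP.DEPT table shown earlier.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GemFireXDQuery
{
    public static void main(String[] args) throws Exception
    {
        Class.forName("com.pivotal.gemfirexd.jdbc.ClientDriver");

        // Same host/port SQLShell connected to above
        Connection conn =
                DriverManager.getConnection("jdbc:gemfirexd://localhost:1527");

        Statement stmt = conn.createStatement();
        ResultSet rset = stmt.executeQuery("SELECT * FROM APP.DEPT");

        while (rset.next())
        {
            System.out.println(rset.getInt("DEPTNO") + " " +
                               rset.getString("DNAME") + " " +
                               rset.getString("LOC"));
        }

        rset.close();
        stmt.close();
        conn.close();
    }
}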

Spring XD Pivotal Gemfire Sink Demo

Wed, 2014-10-08 03:39
Spring XD is a unified, distributed, and extensible system for data ingestion, real time analytics, batch processing, and data export. The project's goal is to simplify the development of big data applications.

There are two implementations of the gemfire sink: gemfire-server and gemfire-json-server. They are identical except that the latter converts JSON string payloads to a JSON document format proprietary to GemFire and provides JSON field access and query capabilities. If you are not using JSON, the gemfire-server module will write the payload to the configured region using Java serialization.

In the example below we show how to connect to an existing GemFire 7.0.2 cluster, using a locator, to add some JSON trade symbols to an existing region in the cluster.

1. Start a GemFire cluster with an existing region as shown below. The following cache.xml files are for "server1" and "server2" of the cluster. They are identical configs, just using different ports.

server1 cache.xml
  
<?xml version="1.0"?>
<!DOCTYPE cache PUBLIC
  "-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN"
  "http://www.gemstone.com/dtd/cache7_0.dtd">

<cache>
  <cache-server bind-address="localhost" port="40404" hostname-for-clients="localhost"/>

  <region name="springxd-region">
    <region-attributes data-policy="partition">
      <partition-attributes redundant-copies="1" total-num-buckets="113"/>
      <eviction-attributes>
        <lru-heap-percentage action="overflow-to-disk"/>
      </eviction-attributes>
    </region-attributes>
  </region>

  <resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>

</cache>

server2 cache.xml
  
<?xml version="1.0"?>
<!DOCTYPE cache PUBLIC
  "-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN"
  "http://www.gemstone.com/dtd/cache7_0.dtd">

<cache>
  <cache-server bind-address="localhost" port="40405" hostname-for-clients="localhost"/>

  <region name="springxd-region">
    <region-attributes data-policy="partition">
      <partition-attributes redundant-copies="1" total-num-buckets="113"/>
      <eviction-attributes>
        <lru-heap-percentage action="overflow-to-disk"/>
      </eviction-attributes>
    </region-attributes>
  </region>

  <resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>

</cache>

2. Verify using GFSH that you have two members, a locator, and a region, as follows.
  
$ gfsh
_________________________ __
/ _____/ ______/ ______/ /____/ /
/ / __/ /___ /_____ / _____ /
/ /__/ / ____/ _____/ / / / /
/______/_/ /______/_/ /_/ v7.0.2.10

Monitor and Manage GemFire
gfsh>connect --locator=localhost[10334];
Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=10.98.94.88, port=1099] ..
Successfully connected to: [host=10.98.94.88, port=1099]

gfsh>list members;
Name     | Id
-------- | ---------------------------------------
server1  | 10.98.94.88(server1:10161)<v1>:15610
server2  | 10.98.94.88(server2:10164)<v2>:39300
locator1 | localhost(locator1:10159:locator):42885

gfsh>list regions;
List of regions
---------------
springxd-region

3. Start a single-node Spring XD server
  
[Wed Oct 08 14:51:06 papicella@:~/vmware/software/spring/spring-xd/spring-xd-1.0.1.RELEASE ] $ xd-singlenode

_____ __ _______
/ ___| (-) \ \ / / _ \
\ `--. _ __ _ __ _ _ __ __ _ \ V /| | | |
`--. \ '_ \| '__| | '_ \ / _` | / ^ \| | | |
/\__/ / |_) | | | | | | | (_| | / / \ \ |/ /
\____/| .__/|_| |_|_| |_|\__, | \/ \/___/
| | __/ |
|_| |___/
1.0.1.RELEASE eXtreme Data


Started : SingleNodeApplication
Documentation: https://github.com/spring-projects/spring-xd/wiki

....

4. Start SpringXD shell
  
$ xd-shell
_____ __ _______
/ ___| (-) \ \ / / _ \
\ `--. _ __ _ __ _ _ __ __ _ \ V /| | |
`--. \ '_ \| '__| | '_ \ / _` | / ^ \| | | |
/\__/ / |_) | | | | | | | (_| | / / \ \ |/ /
\____/| .__/|_| |_|_| |_|\__, | \/ \/___/
| | __/ |
|_| |___/
eXtreme Data
1.0.1.RELEASE | Admin Server Target: http://localhost:9393
Welcome to the Spring XD shell. For assistance hit TAB or type "help".
xd:>

5. Create a stream as follows
  
xd:>stream create --name gemfiredemo --definition "http --port=9090 | gemfire-json-server --host=localhost --port=10334 --useLocator=true --regionName=springxd-region --keyExpression=payload.getField('symbol')" --deploy
Created and deployed new stream 'gemfiredemo'

6. Post some entries via HTTP which will be inserted into the GemFire Region
  
xd:>http post --target http://localhost:9090 --data {"symbol":"ORCL","price":38}
> POST (text/plain;Charset=UTF-8) http://localhost:9090 {"symbol":"ORCL","price":38}
> 200 OK

xd:>http post --target http://localhost:9090 --data {"symbol":"VMW","price":94}
> POST (text/plain;Charset=UTF-8) http://localhost:9090 {"symbol":"VMW","price":94}
> 200 OK

7. Verify via GFSH that data has been inserted into the GemFire region. JSON data stored in GemFire regions is held in PDX format.
  
gfsh>query --query="select * from /springxd-region";

Result : true
startCount : 0
endCount : 20
Rows : 2

symbol | price
------ | -----
ORCL | 38
VMW | 94

NEXT_STEP_NAME : END
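The entries can also be read back from any plain GemFire client. Below is a minimal sketch using the GemFire client API; the locator host/port and region name match the stream definition above, and the key is the trade symbol used as the keyExpression.

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;
import com.gemstone.gemfire.pdx.PdxInstance;

public class ReadTrades
{
    public static void main(String[] args)
    {
        // Connect via the same locator the gemfire-json-server sink uses
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334)
                .create();

        Region<String, PdxInstance> region = cache
                .<String, PdxInstance>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("springxd-region");

        // Entries written by the sink are PDX documents keyed by symbol
        PdxInstance trade = region.get("ORCL");
        System.out.println("ORCL price: " + trade.getField("price"));

        cache.close();
    }
}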

More Information

SpringXD
http://projects.spring.io/spring-xd/

GemFire Sinks
http://docs.spring.io/spring-xd/docs/1.0.1.RELEASE/reference/html/#gemfire-server

Pivotal GemFire 8 - Starting a Locator / Server directly from IntelliJ 13.x

Thu, 2014-09-25 18:42
With the introduction of Pivotal GemFire 8, developers can easily incorporate starting/stopping GemFire locators/servers directly within Java code, allowing them to integrate GemFire management within their IDE. This ensures developers can develop, test, and run GemFire applications all within their IDE of choice, making them much more productive, using very simple launcher APIs.

The locator is a Pivotal GemFire process that tells new, connecting members where running members are located and provides load balancing for server use. A GemFire server is a Pivotal GemFire process that runs as a long-lived, configurable member of a distributed system. The GemFire server is used primarily for hosting long-lived data regions and for running standard GemFire processes such as the server in a client/server configuration.

In this post I am going to show how we can use the following classes to launch a Pivotal GemFire locator/server from code directly within IntelliJ IDEA, allowing you to develop/test GemFire applications directly from your IDE of choice.

Note: In this post we use IntelliJ IDEA 13.x

com.gemstone.gemfire.distributed.LocatorLauncher API
com.gemstone.gemfire.distributed.ServerLauncher API

1. Add the GemFire 8 Maven repository to your project to ensure we pull the required JAR files.
  
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>gemfire-compression</groupId>
  <artifactId>gemfire-compression</artifactId>
  <version>1.0-SNAPSHOT</version>

  <properties>
    <gemfire.version>8.0.0</gemfire.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>com.gemstone.gemfire</groupId>
      <artifactId>gemfire</artifactId>
      <version>${gemfire.version}</version>
      <scope>compile</scope>
    </dependency>
  </dependencies>

  <repositories>
    <repository>
      <id>gemstone-release</id>
      <name>GemStone Maven RELEASE Repository</name>
      <url>http://dist.gemstone.com.s3.amazonaws.com/maven/release</url>
    </repository>
  </repositories>
</project>

2. Create a class as follows to start a locator
  
package pivotal.gemfire.compression;

import com.gemstone.gemfire.distributed.LocatorLauncher;

import java.util.concurrent.TimeUnit;

public class StartLocator
{
    public static void main(String[] args)
    {
        LocatorLauncher locatorLauncher = new LocatorLauncher.Builder()
                .set("jmx-manager", "true")
                .set("jmx-manager-start", "true")
                .set("jmx-manager-http-port", "8083")
                .set("jmx-manager-ssl", "false")
                .setMemberName("locator")
                .setPort(10334)
                .setBindAddress("localhost")
                .build();

        System.out.println("Attempting to start Locator");

        locatorLauncher.start();

        locatorLauncher.waitOnStatusResponse(30, 5, TimeUnit.SECONDS);

        System.out.println("Locator successfully started");
    }
}

3. Create a class as follows to start a single cache server; you could create as many of these as you need. The same launcher APIs can also stop members from code, as shown in the sketch after this class.
  
package pivotal.gemfire.compression;

import com.gemstone.gemfire.distributed.ServerLauncher;

public class StartMember
{
    public static void main(String[] args)
    {
        ServerLauncher serverLauncher = new ServerLauncher.Builder()
                .setMemberName("server1")
                .set("locators", "localhost[10334]")
                .set("cache-xml-file", "cache.xml")
                .set("log-level", "info")
                .build();

        System.out.println("Attempting to start cache server");

        serverLauncher.start();

        System.out.println("Cache server successfully started");
    }
}
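A minimal stop-side sketch, assuming it runs from the same working directories the members were started in (the member names match those used above):

package pivotal.gemfire.compression;

import com.gemstone.gemfire.distributed.LocatorLauncher;
import com.gemstone.gemfire.distributed.ServerLauncher;

public class StopAll
{
    public static void main(String[] args)
    {
        // Stop the cache server first, then the locator
        new ServerLauncher.Builder().setMemberName("server1").build().stop();
        new LocatorLauncher.Builder().setMemberName("locator").setPort(10334).build().stop();
    }
}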

4. Create a cache.xml with a dummy region
  
<!DOCTYPE cache PUBLIC
  "-//GemStone Systems, Inc.//GemFire Declarative Caching 8.0//EN"
  "http://www.gemstone.com/dtd/cache8_0.dtd">
<cache>
  <cache-server bind-address="localhost" port="0" hostname-for-clients="localhost"/>

  <region name="CompressedRegion">
    <region-attributes data-policy="partition">
      <key-constraint>java.lang.String</key-constraint>
      <value-constraint>java.lang.String</value-constraint>
      <partition-attributes redundant-copies="1" total-num-buckets="113"/>
      <eviction-attributes>
        <lru-heap-percentage action="overflow-to-disk"/>
      </eviction-attributes>
    </region-attributes>
  </region>

  <resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>

</cache>

5. Edit the run configuration for StartLocator.java to include the GEMFIRE environment variable as shown below.


6. Run StartLocator.java as shown below.


7. Run StartMember.java as shown below.


8. Finally, from the IDE run a script called verify.sh to view the cluster members/regions to ensure it worked.

verify.sh
  
#!/bin/bash

. ./setup.sh

gfsh <<EOF
connect --locator=localhost[10334];
list members;
list regions;
exit;
EOF

Output



More Information

Pivotal GemFire Locator Processes
http://gemfire.docs.pivotal.io/latest/userguide/deploying/topics/running_the_locator.html

Pivotal GemFire Server Processes
http://gemfire.docs.pivotal.io/latest/userguide/deploying/topics/running_the_cacheserver.html


Creating a Pivotal GemFireXD Data Source Connection from IntelliJ IDEA 13.x

Wed, 2014-09-10 19:04
In order to create a Pivotal GemFireXD Data Source Connection from IntelliJ IDEA 13.x, follow the steps below. You will need to define a GemFireXD driver prior to creating the Data Source itself.

1. Bring up the Databases panel.

2. Define a GemFireXD Driver as follows


3. Once defined, select it using the following options. You are using the driver you created in step 2 above.

+ -> Data Source -> com.pivotal.gemfirexd.jdbc.ClientDriver 

4. Create a Connection as shown below. You need to have a running GemFireXD cluster at this point in order to connect.



5. Once connected you can browse objects as shown below.



6. Finally we can run DML/DDL directly from IntelliJ as shown below.
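Behind the scenes IntelliJ issues such statements over the same JDBC driver. An equivalent plain-Java sketch is below; the table name and values are made up for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class GemFireXDDml
{
    public static void main(String[] args) throws Exception
    {
        Class.forName("com.pivotal.gemfirexd.jdbc.ClientDriver");

        // Same connection details used for the IntelliJ Data Source
        Connection conn = DriverManager.getConnection("jdbc:gemfirexd://localhost:1527");

        Statement stmt = conn.createStatement();

        // DDL then DML, exactly as you would run them from the IDE console
        stmt.execute("CREATE TABLE demo_table (id INT PRIMARY KEY, data VARCHAR(20))");
        stmt.executeUpdate("INSERT INTO demo_table VALUES (1, 'apples')");

        stmt.close();
        conn.close();
    }
}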



Variable in list with Postgres JDBC and Greenplum

Wed, 2014-09-03 21:59
I previously blogged on how to create a variable JDBC IN list with Oracle. Here is how you would do it with Pivotal Greenplum. It is much easier, without having to write a function. In the Greenplum demo below we use the ANY operator combined with string_to_array.

http://theblasfrompas.blogspot.com.au/2008/02/variable-in-list-with-oracle-jdbc-and.html

Code as follows
  
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/**
 * Created by papicella on 4/09/2014.
 */
public class VariableInListGreenplum
{

    public VariableInListGreenplum()
    {
    }

    private Connection getConnection() throws SQLException, ClassNotFoundException
    {
        Class.forName("org.postgresql.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://127.0.0.1:5432/apples", "pas", "pas");

        return conn;
    }

    public void run() throws SQLException
    {
        Connection conn = null;
        PreparedStatement stmt = null;
        ResultSet rset = null;
        String queryInList =
                "SELECT DEPTNO, " +
                "       DNAME, " +
                "       LOC " +
                "FROM scott.DEPT " +
                "WHERE DEPTNO = any(string_to_array(?,', ')) ";

        try
        {
            conn = getConnection();
            stmt = conn.prepareStatement(queryInList);
            stmt.setString(1, "10, 20, 30");
            rset = stmt.executeQuery();

            while (rset.next())
            {
                System.out.println("Dept [" + rset.getInt(1) + ", " +
                        rset.getString(2) + "]");
            }
        }
        catch (Exception e)
        {
            System.out.println("Exception occurred");
            e.printStackTrace();
        }
        finally
        {
            // Close resources in reverse order of creation
            if (rset != null)
            {
                rset.close();
            }

            if (stmt != null)
            {
                stmt.close();
            }

            if (conn != null)
            {
                conn.close();
            }
        }
    }

    public static void main(String[] args) throws Exception
    {
        VariableInListGreenplum test = new VariableInListGreenplum();
        test.run();
    }
}

REST with Pivotal GemFire 8.0

Tue, 2014-09-02 21:32
Pivotal GemFire 8.0 now includes REST support. You can read more about it as follows

http://gemfire.docs.pivotal.io/latest/userguide/gemfire_rest/book_intro.html#concept_7628F498DB534A2D8A99748F5DA5DC94

Here is how we set it up, with some quick examples showing how it works with some region data. In the example below I have PDX set up for the cache servers as shown below.
  
<!DOCTYPE cache PUBLIC
  "-//GemStone Systems, Inc.//GemFire Declarative Caching 8.0//EN"
  "http://www.gemstone.com/dtd/cache8_0.dtd">
<cache>
  <pdx read-serialized="true">
    <pdx-serializer>
      <class-name>com.gemstone.gemfire.pdx.ReflectionBasedAutoSerializer</class-name>
      <parameter name="classes">
        <string>org\.pivotal\.pas\.beans\..*</string>
      </parameter>
    </pdx-serializer>
  </pdx>

.....

1. Firstly you need to enable REST on a cache server node as shown below. Basically, set gemfire.start-dev-rest-api to TRUE; you could use a gemfire.properties file, but here we just pass it to GFSH as part of the server start command.

start server --name=server1 --classpath=$CLASSPATH --server-port=40411 --cache-xml-file=./server1/cache.xml --properties-file=./server1/gemfire.properties --locators=localhost[10334] --dir=server1 --initial-heap=1g --max-heap=1g --J=-Dgemfire.http-service-port=7070 --J=-Dgemfire.http-service-bind-address=localhost --J=-Dgemfire.start-dev-rest-api=true

2. Once started we can quickly ensure we have the REST server up on port 7070 as shown below.

[Wed Sep 03 12:39:18 papicella@:~/ant-demos/gemfire/80/demo ] $ netstat -an | grep 7070
tcp4       0      0  127.0.0.1.7070         *.*                    LISTEN

3. Next test that you can access the REST server. The command below will list all the regions available in the cluster.

[Wed Sep 03 12:52:44 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Location: http://localhost:7070/gemfire-api/v1
Accept-Charset: big5, big5-hkscs, euc-jp, euc-kr, gb18030, gb2312, gbk, ibm-thai, ibm00858, ibm01140, ibm01141, ibm01142, ibm01143, ibm01144, ibm01145, ibm01146, ibm01147, ibm01148, ibm01149, ibm037, ibm1026, ibm1047, ibm273, ibm277, ibm278, ibm280, ibm284, ibm285, ibm290, ibm297, ibm420, ibm424, ibm437, ibm500, ibm775, ibm850, ibm852, ibm855, ibm857, ibm860, ibm861, ibm862, ibm863, ibm864, ibm865, ibm866, ibm868, ibm869, ibm870, ibm871, ibm918, iso-2022-cn, iso-2022-jp, iso-2022-jp-2, iso-2022-kr, iso-8859-1, iso-8859-13, iso-8859-15, iso-8859-2, iso-8859-3, iso-8859-4, iso-8859-5, iso-8859-6, iso-8859-7, iso-8859-8, iso-8859-9, jis_x0201, jis_x0212-1990, koi8-r, koi8-u, shift_jis, tis-620, us-ascii, utf-16, utf-16be, utf-16le, utf-32, utf-32be, utf-32le, utf-8, windows-1250, windows-1251, windows-1252, windows-1253, windows-1254, windows-1255, windows-1256, windows-1257, windows-1258, windows-31j, x-big5-hkscs-2001, x-big5-solaris, x-compound_text, x-euc-jp-linux, x-euc-tw, x-eucjp-open, x-ibm1006, x-ibm1025, x-ibm1046, x-ibm1097, x-ibm1098, x-ibm1112, x-ibm1122, x-ibm1123, x-ibm1124, x-ibm1364, x-ibm1381, x-ibm1383, x-ibm300, x-ibm33722, x-ibm737, x-ibm833, x-ibm834, x-ibm856, x-ibm874, x-ibm875, x-ibm921, x-ibm922, x-ibm930, x-ibm933, x-ibm935, x-ibm937, x-ibm939, x-ibm942, x-ibm942c, x-ibm943, x-ibm943c, x-ibm948, x-ibm949, x-ibm949c, x-ibm950, x-ibm964, x-ibm970, x-iscii91, x-iso-2022-cn-cns, x-iso-2022-cn-gb, x-iso-8859-11, x-jis0208, x-jisautodetect, x-johab, x-macarabic, x-maccentraleurope, x-maccroatian, x-maccyrillic, x-macdingbat, x-macgreek, x-machebrew, x-maciceland, x-macroman, x-macromania, x-macsymbol, x-macthai, x-macturkish, x-macukraine, x-ms932_0213, x-ms950-hkscs, x-ms950-hkscs-xp, x-mswin-936, x-pck, x-sjis_0213, x-utf-16le-bom, x-utf-32be-bom, x-utf-32le-bom, x-windows-50220, x-windows-50221, x-windows-874, x-windows-949, x-windows-950, x-windows-iso2022jp
Content-Type: application/json
Content-Length: 493
Date: Wed, 03 Sep 2014 02:52:46 GMT

{
  "regions" : [ {
    "name" : "demoRegion",
    "type" : "PARTITION",
    "key-constraint" : null,
    "value-constraint" : null
  }, {
    "name" : "departments",
    "type" : "PARTITION",
    "key-constraint" : null,
    "value-constraint" : null
  }, {
    "name" : "employees",
    "type" : "PARTITION",
    "key-constraint" : null,
    "value-constraint" : null
  }, {
    "name" : "complex",
    "type" : "PARTITION",
    "key-constraint" : null,
    "value-constraint" : null
  } ]
}

4. We have a couple of regions in this cluster and once again I am using the classic DEPT/EMP regions here. Some simple REST commands on the "departments" region follow.

View all DEPARTMENT region entries

[Wed Sep 03 12:53:38 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1/departments
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Location: http://localhost:7070/gemfire-api/v1/departments/20,10,30,40
Content-Type: application/json
Content-Length: 225
Date: Wed, 03 Sep 2014 02:53:40 GMT

{
  "departments" : [ {
    "deptno" : 20,
    "name" : "RESEARCH"
  }, {
    "deptno" : 10,
    "name" : "ACCOUNTING"
  }, {
    "deptno" : 30,
    "name" : "SALES"
  }, {
    "deptno" : 40,
    "name" : "OPERATIONS"
  } ]
}

VIEW a single region entry by KEY

[Wed Sep 03 12:55:34 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1/departments/10
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Location: http://localhost:7070/gemfire-api/v1/departments/10
Content-Type: application/json
Content-Length: 44
Date: Wed, 03 Sep 2014 02:55:36 GMT

{
  "deptno" : 10,
  "name" : "ACCOUNTING"
}

VIEW multiple entries by KEY

[Wed Sep 03 12:56:25 papicella@:~/ant-demos/gemfire/80/demo/rest ] $ curl -i http://localhost:7070/gemfire-api/v1/departments/10,30
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Location: http://localhost:7070/gemfire-api/v1/departments/10,30
Content-Type: application/json
Content-Length: 123
Date: Wed, 03 Sep 2014 02:56:28 GMT

{
  "departments" : [ {
    "deptno" : 10,
    "name" : "ACCOUNTING"
  }, {
    "deptno" : 30,
    "name" : "SALES"
  } ]
}
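The endpoints are plain HTTP, so any client will do. Below is a minimal Java sketch of the single-key GET above using HttpURLConnection.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class GemFireRestGet
{
    public static void main(String[] args) throws Exception
    {
        // Same URL queried with curl above
        URL url = new URL("http://localhost:7070/gemfire-api/v1/departments/10");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");

        // Print the JSON response body
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));

        String line;
        while ((line = reader.readLine()) != null)
        {
            System.out.println(line);
        }

        reader.close();
        conn.disconnect();
    }
}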

5. We can even use the Spring REST shell as shown below.

Obtain rest-shell using the link below.

https://github.com/spring-projects/rest-shell

  
[Wed Sep 03 13:06:22 papicella@:~ ] $ rest-shell

___ ___ __ _____ __ _ _ _ _ __
| _ \ __/' _/_ _/' _/| || | / / | \ \
| v / _|`._`. | | `._`.| >< | / / / > >
|_|_\___|___/ |_| |___/|_||_| |_/_/ /_/
1.2.1.RELEASE

Welcome to the REST shell. For assistance hit TAB or type "help".
http://localhost:8080:> baseUri http://localhost:7070/
Base URI set to 'http://localhost:7070'
http://localhost:7070:> follow gemfire-api
http://localhost:7070/gemfire-api:> follow v1
http://localhost:7070/gemfire-api/v1:> follow departments
http://localhost:7070/gemfire-api/v1/departments:> get 20
> GET http://localhost:7070/gemfire-api/v1/departments/20

< 200 OK
< Server: Apache-Coyote/1.1
< Content-Location: http://localhost:7070/gemfire-api/v1/departments/20
< Content-Type: application/json
< Content-Length: 42
< Date: Wed, 03 Sep 2014 03:07:17 GMT
<
{
  "deptno" : 20,
  "name" : "RESEARCH"
}
http://localhost:7070/gemfire-api/v1/departments:>

6. Open a browser and enter the following URL to browse the Swagger-enabled REST APIs:

http://localhost:7070/gemfire-api/docs/index.html



7. Perform an operation as shown below.





Dept/Emp POJO's with sample data for Pivotal GemFire

Tue, 2014-08-12 21:57
I constantly blog about using DEPARTMENT/EMPLOYEE POJOs with sample data. Here is how to create files with data to load into GemFire to give you that sample set.

Note: You would need to create POJOs for the Department/Employee objects that have getters/setters for the attributes mentioned below.

Dept Data

put --key=10 --value=('deptno':10,'name':'ACCOUNTING') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=20 --value=('deptno':20,'name':'RESEARCH') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=30 --value=('deptno':30,'name':'SALES') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=40 --value=('deptno':40,'name':'OPERATIONS') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;

Emp Data

put --key=7369 --value=('empno':7369,'name':'SMITH','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7370 --value=('empno':7370,'name':'APPLES','job':'MANAGER','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7371 --value=('empno':7371,'name':'APICELLA','job':'SALESMAN','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7372 --value=('empno':7372,'name':'LUCIA','job':'PRESIDENT','deptno':30) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7373 --value=('empno':7373,'name':'SIENA','job':'CLERK','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7374 --value=('empno':7374,'name':'LUCAS','job':'SALESMAN','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7375 --value=('empno':7375,'name':'ROB','job':'CLERK','deptno':30) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7376 --value=('empno':7376,'name':'ADRIAN','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7377 --value=('empno':7377,'name':'ADAM','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7378 --value=('empno':7378,'name':'SALLY','job':'MANAGER','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7379 --value=('empno':7379,'name':'FRANK','job':'CLERK','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7380 --value=('empno':7380,'name':'BLACK','job':'CLERK','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7381 --value=('empno':7381,'name':'BROWN','job':'SALESMAN','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;

Load into GemFire (this assumes the JAR containing the POJOs exists on the classpath of the GemFire cache servers)

The script below uses GFSH to load each file into the correct region, referencing the correct POJO for the data inside the files created above.

export CUR_DIR=`pwd`

gfsh <<!
connect --locator=localhost[10334];
run --file=$CUR_DIR/dept-data
run --file=$CUR_DIR/emp-data
!

Below is what the Department.java POJO would look like for example.
  
package pivotal.au.se.deptemp.beans;

public class Department
{
    private int deptno;
    private String name;

    public Department()
    {
    }

    public Department(int deptno, String name) {
        super();
        this.deptno = deptno;
        this.name = name;
    }

    public int getDeptno() {
        return deptno;
    }

    public void setDeptno(int deptno) {
        this.deptno = deptno;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Department [deptno=" + deptno + ", name=" + name + "]";
    }

}
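The matching Employee POJO follows the same pattern. A minimal sketch using the attributes from the put commands above (empno, name, job, deptno):

package pivotal.au.se.deptemp.beans;

public class Employee
{
    private int empno;
    private String name;
    private String job;
    private int deptno;

    public Employee()
    {
    }

    public Employee(int empno, String name, String job, int deptno) {
        super();
        this.empno = empno;
        this.name = name;
        this.job = job;
        this.deptno = deptno;
    }

    public int getEmpno() {
        return empno;
    }

    public void setEmpno(int empno) {
        this.empno = empno;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getJob() {
        return job;
    }

    public void setJob(String job) {
        this.job = job;
    }

    public int getDeptno() {
        return deptno;
    }

    public void setDeptno(int deptno) {
        this.deptno = deptno;
    }

    @Override
    public String toString() {
        return "Employee [empno=" + empno + ", name=" + name +
                ", job=" + job + ", deptno=" + deptno + "]";
    }

}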

Using HAWQ with PHD service in PCF 1.2

Mon, 2014-07-28 06:01
The following demo shows how to use the PCF 1.2 PHD service with HAWQ by loading data into the PCF PaaS platform.

1. First let's set up our environment to use the correct version of Hadoop on our local laptop.

export HADOOP_INSTALL=/Users/papicella/vmware/software/hadoop/hadoop-2.0.5-alpha
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home

export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

export HADOOP_OPTS="$HADOOP_OPTS -Djava.awt.headless=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="

export YARN_OPTS="$YARN_OPTS -Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk -Djava.awt.headless=true"

hadoop version

2. Set the HADOOP_USER_NAME to ensure you have write access to load a file.

export HADOOP_USER_NAME=ucc3a04008db2486

3. Create a file called person.txt with some pipe-delimited data; example below.

[Mon Jul 28 21:47:37 papicella@:~/vmware/software/hadoop/cloud-foundry/pcf12/demo ] $ head person.txt
1|person1
2|person2
3|person3
4|person4
5|person5

4. Load the file into the PHD instance running in PCF 1.2. You will need to use the NameNode host and path which are correct for your PHD instance.

[Mon Jul 28 21:51:43 papicella@:~/vmware/software/hadoop/cloud-foundry/pcf12/demo ] $ hadoop fs -put person.txt hdfs://x.x.x.x:8020/user/ucc3a04008db2486/

5. Create a HAWQ external table over the file person.txt using PXF as shown below.

CREATE EXTERNAL TABLE person (id int, name text)
LOCATION ('pxf://x.x.x.x:50070/user/ucc3a04008db2486/person.txt?Fragmenter=HdfsDataFragmenter&Accessor=TextFileAccessor&Resolver=TextResolver')
FORMAT 'TEXT' (DELIMITER = '|');

6. Query the table as shown below.
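Since HAWQ speaks the PostgreSQL wire protocol, you can also run the same query from Java with the standard Postgres JDBC driver. A minimal sketch follows; the host, port, database, and credentials are placeholders for your PHD service instance.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryPerson
{
    public static void main(String[] args) throws Exception
    {
        Class.forName("org.postgresql.Driver");

        // Placeholder connection details for the HAWQ instance
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://x.x.x.x:5432/gpadmin", "user", "password");

        Statement stmt = conn.createStatement();

        // Queries the PXF-backed external table created above
        ResultSet rset = stmt.executeQuery("SELECT * FROM person ORDER BY id");

        while (rset.next())
        {
            System.out.println(rset.getInt("id") + " | " + rset.getString("name"));
        }

        rset.close();
        stmt.close();
        conn.close();
    }
}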



For more information on the PHD service see the link below.

http://docs.pivotal.io/pivotalhd-ds/index.html
