
Feed aggregator

PeopleTools 8.54 Features: Dynamic Alert Sliding Windows

Javier Delgado - Fri, 2014-10-24 02:57
One of my first memories of the PeopleSoft world dates from my training bootcamp when I joined PeopleSoft. The instructor was showing us the Process Monitor functionality in PeopleSoft 7.5, where the button used to refresh the list of scheduled processes was represented by a fetching dog named Sparky.

It actually surprised me that an application button had a name, but that was the PeopleSoft style. Anyway, the poor dog did not last long. In the year 2000, with the introduction of PeopleSoft 8, our beloved Sparky was replaced by a boring Refresh button.

PeopleTools 8.54 has pushed this functionality to the next generation, potentially making the trip to Process Monitor redundant. One of the new features in this release is the ability to show the status of a process within the same page where it was scheduled. This is a major usability improvement, as users no longer need to navigate to Process Monitor to check the status of a process instance. True, previous PeopleTools versions also offered the possibility of running a process with output to Window, which, using a REN Server, achieved a similar result. The main drawback of the REN Server approach is that it opened a new page/tab even before the process had finished, making navigation more complicated.

The new functionality is called Dynamic Alert Sliding Windows, which is still more boring than Sparky, but what matters is the functionality, not the name. These notifications are enabled in the Process Scheduler System Settings page:


On this page, the administrator chooses which statuses are displayed to the user when running a process. As you can see, the functionality is quite easy to set up and a significant step forward in the usability of batch process scheduling.



PeopleSoft's PS_HOME evolution

Javier Delgado - Fri, 2014-10-24 02:56
One of the new features of PeopleTools 8.54 is the portability of the PS_HOME directory. Before going into the analysis of its benefits, let's look back at how PS_HOME has evolved.

One Directory for Everything

PS_HOME is the name of the environment variable holding the PeopleSoft installation directory. Before PeopleTools 8.50, the full PeopleSoft installation was done in a single directory, including PeopleTools binaries, application external files, customized files, logs, etc. Also, in installations using WebLogic or WebSphere, the J2EE deployment was normally located at PS_HOME/webserv (this was not the case for Oracle Application Server, which used its own directories for that purpose).

The main issue with this approach was that the Ops team would normally go nuts when they saw how the directories were structured in PeopleSoft. Very often, keeping read-only binary files and constantly changing log files in the same directory structure did not comply with the internal policies of many organizations. With some degree of manual configuration and symbolic linking, the issue could be tackled, but the workaround increased maintenance costs, particularly when a PeopleTools or application upgrade came into the scene.

Splitting Logic and Data

PeopleTools 8.50 provided the ability to split the PS_HOME directory contents into three different places:
  • PIA_HOME: contained the J2EE deployment, equivalent to the former PS_HOME/webserv directory.
  • PS_CFG_HOME: contained logs, traces and search indexes. Basically, any file created, modified or deleted at run time.
  • PS_HOME: contained the binaries and external programs such as Crystal Reports, COBOLs and SQRs.
This was a major improvement. Now the binaries could be kept as read-only except when an external program was migrated. Moreover, the monitoring of disk space could now be restricted to PIA_HOME and PS_CFG_HOME.

PeopleTools and Applications in Different Rooms

PeopleTools 8.52, together with the PeopleSoft 9.1 applications, introduced a new directory: PS_APP_HOME. This directory contained exclusively the application binaries and external program files, leaving PS_HOME just for the specific PeopleTools files.

This approach allowed a simpler maintenance of the product. For instance, you could use the same PS_HOME for both PeopleSoft HCM and FSCM, keeping the specific application files in their own PS_APP_HOME directories. This way, when you applied a PeopleTools patch on PS_HOME, it would be available for all applications.

Clearly Identify your Customizations

The natural evolution of PS_APP_HOME was PS_CUST_HOME, introduced in PeopleTools 8.53. This new directory was meant to hold all the customized external files. This helped not only in keeping PS_HOME and PS_APP_HOME almost read-only (they would be updated only by PeopleTools or application upgrades), but also in clearly identifying the customizations, which is a tremendous gain when performing an application upgrade.
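
To make the split concrete, here is a sketch of how the four homes might be laid out on a UNIX server from PeopleTools 8.53 onwards. The paths are purely illustrative; every site chooses its own:

/opt/psft/pt/ps_home8.53      # PeopleTools binaries, kept read-only
/opt/psft/hcm/ps_app_home     # application (e.g. HCM) binaries and external files
/opt/psft/hcm/ps_cust_home    # customized external files (SQRs, COBOLs, ...)
/home/psadm/ps_cfg_home       # logs, traces and caches written at run time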

And now... Portable PS_HOME

PeopleTools 8.54 has gone a step further in simplifying the maintenance of a PeopleSoft installation. One of the issues we still faced with PS_HOME was that we could not move it to a different directory without running into problems, as it contained some symbolic links and files with absolute directory references.

This could be solved by adjusting the symbolic links and directory references, but it was a time-consuming process. The alternative was to reinstall PS_HOME from the delivered install images, but even in the best scenario this could take a couple of hours.

In the latest PeopleTools release, all symbolic links have been removed, and all directory references are relative rather than absolute. This allows the system administrator to easily move the directory to another location, or even to another server. Actually, you may not even need to move it: just mounting the PS_HOME directory installed on one server into all the different PeopleSoft servers would do the trick, so you only need to apply changes in a single place.
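
As a minimal sketch (assuming an NFS export; the server and mount point names are invented for the example), sharing a single PS_HOME could look like this:

# on each PeopleSoft server, mount the shared PS_HOME read-only
mount -t nfs -o ro fileserver:/exports/ps_home854 /opt/psft/ps_home854
export PS_HOME=/opt/psft/ps_home854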

I'm sure System Administrators and Installers will love this new feature. At BNB we are also analyzing other potential uses for it, but let me keep the secret for the moment ;).

Tip: One of the symbolic links removed on UNIX/Linux platforms was the PS_HOME/appserv/psadmin link. If you have any maintenance scripts that boot or shut down services using this path, you will need to adjust them to point to the source location, PS_HOME/bin/psadmin, or simply call psadmin after executing psconfig.sh.
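
For instance, a boot script would change along these lines (a sketch; the domain name is a placeholder):

# before 8.54 the symbolic link could be used:
#   $PS_HOME/appserv/psadmin -c boot -d HRDMO
# from 8.54 onwards, call the real binary...
$PS_HOME/bin/psadmin -c boot -d HRDMO
# ...or source psconfig.sh first, which puts psadmin on the PATH
. $PS_HOME/psconfig.sh && psadmin -c boot -d HRDMO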

Ordina Open Wereld 2014

Marco Gralike - Fri, 2014-10-24 02:18
The world's largest IT conference, Oracle OpenWorld in San Francisco, is over. Several Ordina people are…

Les journées SQL Server 2014 - an event that should interest you!

Yann Neuhaus - Fri, 2014-10-24 02:05

As you may already know, the biggest French-speaking SQL Server event is coming soon. Les journées SQL Server 2014, organized by the GUSS (Groupement des Utilisateurs SQL Server), will take place in Paris on December 1 and 2, 2014. dbi services will be there too - will you?

 

Les journées SQL Server 2014


Here is the event's website: http://guss.pro/2014/12/01/journees-sql-server-2014/

The two days cover SQL Server 2014 administration and development as well as BI.

On this occasion, I can announce that dbi services will be present with two sessions:

 

  • Monday 01/12/2014, 10:30 am: Infrastructure and AlwaysOn - by David Barbarin

A session based on field experience with SQL Server AlwaysOn architectures and on-premise availability groups at various customers. The topics we will cover are diverse and varied, concerning both the implementation phases of an AlwaysOn architecture and day-to-day operations.

 

  • Tuesday 02/12/2014, 10:30 am: Security via policies - by Stéphane Haby

Drawing on my experience in the banking sector (Swiss, of course), I will present a simple way to control security via policies. What types of checks should be put in place at the instance level and at the database level? How do you manage them across several servers? How do you get nice little reports or e-mails in return…

Let's try to answer together all these little questions that arise during security audits.

 

You will find the full session agenda here. dbi services will also have a booth where you can come and talk with our SQL Server experts.

Don't hesitate to come and meet us at this important French-speaking event! Besides being beautiful, Paris is not that far from Switzerland. ;-)

LISTedTECH: New wiki site and great visualizations

Michael Feldstein - Thu, 2014-10-23 18:06

Last year I wrote about a relatively new site offering very interesting data and visualizations in the ed tech world. LISTedTECH was created by Justin Menard, a Senior Business Intelligence Analyst at the University of Ottawa. First of all, the site is broader in scope than just the LMS – there is a rich source of data & visualizations on MOOCs, university rankings, and IPEDS data. Most of the visualizations are built with Tableau and are therefore interactive in nature, allowing the user to filter data, zoom in on geographic data, etc. Since e-Literate is not set up for full-page visualizations, I have included screen shots below, but clicking on an image will take you to the appropriate LISTedTECH page.

[Screenshot: Top Learning Management System (LMS) by State or Province - LISTedTECH]

Justin created the LISTedTECH site based on his frustration with getting valuable market information while working on an ERP project at the University of Ottawa. After taking a year-long travel sabbatical, he added a programmer to his team this past summer. Justin does not have immediate plans to monetize the site beyond hoping to pay for server time.

LISTedTECH is a wiki. Anyone can sign up and contribute data on institutions and products. Justin gets notifications of data added and verifies data.[1] One of the key benefits of a wiki model is the ability to get user-defined data and even user ideas on useful data to include. Another benefit is the ability to scale. One of the key downsides of the wiki model is the need to clean out bad data, which can grow over time. Another downside is the selective sampling in data coverage.

LISTedTECH puts a priority on North America, and currently all ~140 Canadian schools are included. Justin and team are currently working to get complete, or near complete, US coverage. The one below could be titled If Ed Tech Were a Game of Risk, Moodle Wins.

[Screenshot: World Map of Learning Management Systems 08/2013 - LISTedTECH]

As of today the site includes:

  • Companies (511)
  • Products (1,326)
  • Institutions (27,595)
  • Listed products used by institutions (over 18,000)
  • Product Categories (36)
  • Countries (235)
  • World Rankings (9)

The biggest change since I wrote last year is that LISTedTECH has moved to a new site.

We have (finally) launched our new website wiki.listedtech.com. As you might remember, our old Drupal based site had been wikified to try and make contributions easier and try to build a community around HigherEd tech data. Even if it was possible to edit and share information, it was difficult to get all the steps down, and in the right order.

With the new version of the site, we knew that we needed a better tool. The obvious choice was to use the Mediawiki platform. To attain our goal of better data, we souped it up with semantic extensions. This helps by structuring the data on the pages so that they can be queried like a database.
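
As an illustration of what the semantic extensions enable, Semantic MediaWiki exposes an "ask" query module through the standard MediaWiki API. The property and category names below are invented for the example, so treat this purely as a sketch of the querying style:

# hypothetical query: the first five institutions recorded as using Moodle
curl 'http://wiki.listedtech.com/api.php?action=ask&format=json&query=[[Category:Institution]][[Uses%20product::Moodle]]%7Climit=5'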

Another example shows the history of commercial MOOCs based on the number of partner institutions:

[Screenshot: MOOCs, a short history - LISTedTECH]

I’m a sucker for great visualizations, and there is a lot to see at the site. One example is on blended learning and student retention, using official IPEDS data in the US. “Blended” in this case means that the institution offers a mix of face-to-face and online courses.

[Screenshot: Blended Learning and Student Retention - LISTedTECH]

This is interesting – for 4-year institutions there is a negative correlation between student retention and the percentage of courses available online, while for 2-year institutions the story is very different. That data invites additional questions and exploration.

All of the data for the website is available for download as XML files.

  1. He asks for people to include a link to source data to help in the QA process.

The post LISTedTECH: New wiki site and great visualizations appeared first on e-Literate.

What Faculty Should Know About Competency-Based Education

Michael Feldstein - Thu, 2014-10-23 16:26

I loved the title of Phil’s recent post, “Competency-Based Education: Not just a drinking game” because it acknowledges that, whatever else CBE is, it is also a drinking game. The hype is huge and still growing. I have been thinking a lot lately about Gartner’s hype cycle and how it plays out in academia. In a way, it was really at the heart of the Duke keynote speech I posted the other day. There are a lot of factors that amplify it and make it more pernicious in the academic ecosystem than it is elsewhere. But it’s a tough beast to tackle.

I got some good responses to the “what faculty should know…” format that I used for a post about adaptive learning, so I’m going to try it again here in somewhat modified form. Let me know what you think of the format.

What Competency-Based Education (CBE) Is

The basic idea behind CBE is that what a student learns to pass a course (or program) should be fixed while the time it takes to do so should be variable. In our current education system, a student might have 15 weeks to master the material covered in a course and will receive a grade based on how much of the material she has mastered. CBE takes the position that the student should be able to take either more or less time than 15 weeks but should only be certified for completing the course when she has mastered all the elements. When a student registers for a course, she is in it until she passes the assessments for the course. If she comes in already knowing a lot and can pass the assessments in a few weeks—or even immediately—then she gets out quickly. If she is not ready to pass the assessments at the end of 15 weeks, she keeps working until she is ready.

Unfortunately, the term “CBE” is used very loosely and may have different connotations in different contexts. First, when “competency-based education” was first coined, it was positioned explicitly against similar approaches (like “outcomes-based education” and “mastery learning”) in that CBE was intended to be vocationally oriented. In other words, one of the things that CBE was intended to accomplish by specifying competencies was to ensure that what the students are learning is relevant to job skills. CBE has lost that explicit meaning in popular usage, but a vocational focus is often (but not always) present in the subtext.

Also, competencies increasingly feature prominently even in classes that do not have variable time. This is particularly true with commercial courseware. Vendors are grouping machine-graded assessment questions into “learning objectives” or competencies that are explicitly tied to instructional readings, videos, and so on. Rather than reporting that the student got quiz questions 23 through 26 wrong, the software is reporting that the student is not able to answer questions on calculating angular momentum, which was covered in the second section of Chapter 3. Building on this helpful but relatively modest innovation, courseware products are providing increasingly sophisticated support to both students and teachers on areas of the course (or “competencies”) where students are getting stuck. This really isn’t CBE in the way the term was originally intended but is often lumped together with CBE.

What It’s Good For

Because the term “CBE” is used for very different approaches, it is important to distinguish among them in terms of their upsides and downsides. Applying machine-driven competency-based assessments within a standard, time-based class is useful and helpful largely to the extent that machine-based assessment is useful and helpful. If you already are comfortable using software to quiz your students, then you will probably find competency-based assessments to be an improvement in that they provide improved feedback. This is especially true for skills that build on each other. If a student doesn’t master the first skill in such a sequence, she is unlikely to master the later skills that depend on it. A competency-based assessment system can help identify this sort of problem early so that the student doesn’t suffer increasing frustration and failure throughout the course just because she needs a little more help on one concept.

Thinking about your (time-based) course in terms of competencies, whether they are assessed by a machine or by a teacher, is also a useful tool in terms of helping you as a teacher shift your thinking from what it is you want to teach to what it is you want your students to learn—and how you will know that they have learned it. Part of defining a competency is defining how you will know when a student has achieved it. Thinking about your courses this way can not only help you design your courses better but also help when it is time to talk to your colleagues about program-level or even college-level goals. In fact, many faculty encounter the word “competency” for the first time in their professional context when discussing core competencies on a college-wide basis as part of the general education program. If you  have participated in these sorts of conversations, then you may well have found them simultaneously enlightening and incredibly frustrating. Defining competencies well is hard, and defining them so that they make sense across disciplines is even harder. But if faculty are engaged in thinking about competencies on a regular basis, both as individual teachers and as part of a college or disciplinary community, then they will begin to help each other articulate and develop their competencies around working with competencies.

Assuming that the competencies and assessments are defined well, then moving from a traditional time- or term-based structure to full go-at-your-own-pace CBE can help students by enabling those students who are especially bright or come in with prior knowledge and experience to advance quickly, while giving students who just need a little more time the chance they need to succeed. Both of these aspects are particularly important for non-traditional students[1] who come into college with life experience but also need help making school work with their work and life schedules—and who may very well have dropped out of college previously because they got stuck on a concept here or there and never got help to get past it.

What To Watch Out For

All that said, there are considerable risks attached to CBE. As with just about anything else in educational technology, one of the biggest has more to do with the tendency of technology products to get hyped than it does with the underlying ideas or technologies themselves. Schools and vendors alike, seeing a huge potential market of non-traditional students, are increasingly talking about CBE as a silver bullet. It is touted as more “personalized” than traditional courses in the sense that students can go at their own pace, and it “scales”—if the assessments are largely machine graded. This last piece is where CBE goes off the tracks pretty quickly. Along with the drive to serve a large number of students at lower cost comes a strong temptation to dumb down competencies to the point where they can be entirely machine graded. Again, while this probably doesn’t do much damage to traditional courses or programs that are already machine graded, it can do considerable damage in cases where the courses are not. And because CBE programs are typically aimed at working-class students who can’t afford to go full-time, CBE runs the risk of making what is already a weaker educational experience in many cases (relative to expensive liberal arts colleges with small class sizes) worse by watering down standards for success and reducing the human support, all while advertising itself as “personalized.”

A second potential problem is that, even if the competencies are not watered down, creating a go-at-your-own-pace program makes social learning more of a challenge. If students are not all working on the same material at the same time, then they may have more difficulty finding peers they can work with. This is by no means an insurmountable design problem, but it is one that some existing CBE programs have failed to surmount.

Third, there are profound labor implications for moving from a time-based structure to CBE, starting with the fact that most contracts are negotiated around the number of credit hours faculty are expected to teach in a term. Negotiating a move from a time-based program to full CBE is far from straightforward.

Recommendations

CBE offers the potential to do a lot of good where it is implemented well and a lot of harm where it is implemented poorly. There are steps faculty can take to increase the chances of a positive outcome.

First, experiment with machine-graded competency-based programs in your traditional, time-based classes if and only if you are persuaded that the machine is capable of assessing the students well at what it is supposed to assess. My advice here is very similar to the advice I gave regarding adaptive learning, which is to think about the software as a tutor and to use, supervise, and assess its effectiveness accordingly. If you think that a particular software product can provide your students with accurate guidance regarding which concepts they are getting and which ones they are not within a meaningful subset of what you are teaching, then it may be worth trying. But there is nothing magical about the word “competency.” If you don’t think that software can assess the skills that you want to assess, then competency-based software will be just as bad at it.

Second, try to spend a little time as you prepare for a new semester to think about your course in terms of competencies and refine your design at least a bit with each iteration. What are you trying to get students to know? What skills do you want them to have? How will you know if they have succeeded in acquiring that knowledge and those skills? How are your assessments connected to your goals? How are your lectures and course materials connected to them? To what degree are the connections clear and explicit?

Third, familiarize yourself with CBE efforts that are relevant to your institution and discipline, particularly if they are driven by organizations that you respect. For example, the Association of American Colleges and Universities (AAC&U) has created a list of competencies called the Degree Qualifications Profile (DQP) and a set of assessment rubrics called Valid Assessment of Learning in Undergraduate Education (VALUE). While these programs are consistent with and supportive of designing a CBE program, they focus on defining competencies students should receive from a high-quality liberal arts education and emphasize the use of rubrics applied by expert faculty for assessment over machine grading.

And finally, if your institution moves in the direction of developing a full CBE program, ask the hard questions, particularly about quality. What are the standards for competencies and assessments? Are they intended to be the same as for the school’s traditional time-based program? If so, then how will we know that they have succeeded in upholding those standards? If not, then what will the standards be, and why are they appropriate for the students who will be served by the program?

 

  1. The term “non-traditional” is really out of date, since at many schools students who are working full-time while going to school are the rule rather than the exception. However, since I don’t know of a better term, I’m sticking with non-traditional for now.

The post What Faculty Should Know About Competency-Based Education appeared first on e-Literate.

ADF BC View Object SQL Query Customization with MDS

Andrejus Baranovski - Thu, 2014-10-23 12:51
This post builds on my previous article about MDS Seeded Customization - MDS Seeded Customization Approach with Empty External Project. Today I will focus on explaining how to customise the SQL query of a read-only VO. This is not as obvious as it sounds. However, it is doable - I will explain how.

Sample application - MDSCustomizationsApp_v2.zip - contains both the main and the customisation projects. The main project implements a read-only VO for Jobs data; it doesn't include the Job Title attribute:


This is how the SQL query looks for such a VO in the wizard:


We should take a look at the source code: the SQL query is defined by the SQLQuery tag. The tag doesn't have an ID, which would normally prevent customising it using the JDEV MDS wizards:


Even though there is no ID set, we can still customise it - I will show you how. Here is an example of an SQL query customisation stored in an MDS customisation file. First, remove the SQL query completely from the VO and then add it again (this will create two entries in the MDS customisation file). Once the SQL query is written to the MDS customisation file, we can change it as we like - here, the Job Title attribute is added to the SQL statement:
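
For reference, the two generated entries follow the usual MDS remove-then-insert pattern. Below is only a hand-written sketch of the idea (the file path, node references and namespaces are illustrative; JDeveloper generates the fully qualified versions for you):

$ cat mdssys/cust/site/site/JobsVO.xml.xml
<mds:replace node=".../ViewObject/SQLQuery"/>          <!-- removal of the original tag -->
<mds:insert parent=".../ViewObject" position="last">   <!-- re-insert with the changed query -->
  <SQLQuery><![CDATA[
    SELECT Job.JOB_ID, Job.JOB_TITLE, Job.MIN_SALARY, Job.MAX_SALARY
    FROM JOBS Job
  ]]></SQLQuery>
</mds:insert>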


We need to add the VO attribute for Job Title separately; this can be done through the JDEV MDS customisation wizard. Make sure to set the IsSelected=true property, otherwise the attribute will not be populated from the query:


You should drag and drop the Job Title attribute from the Data Control into the fragment; this will generate a binding entry in the page definition file:


The UI component for Job Title will be generated as an ADF Faces output text:


At runtime you should notice the customised SQL statement being executed; this statement includes the Job Title added through the MDS customisation:


The Job Title value is fetched and displayed in the UI:

The Benefits of Integrating a Google Search Appliance with an Oracle WebCenter or Liferay Portal

This month, the Fishbowl team presented two webinars on integrating a Google Search Appliance with a WebCenter or Liferay Portal. Our new product, the GSA Portal Search Suite, makes integration simple and also allows for customization to create a seamless, secure search experience. It brings a powerful, Google-like search experience directly to your portal.

The first webinar, “The Benefits of Google Search for your Oracle WebCenter or Liferay Portal”, focused on the Google Search Appliance and the positive experiences users have had with incorporating Google search in the enterprise.

 

The second webinar, “Integrating the Google Search Appliance with a WebCenter or Liferay Portal”, dove deeper into the GSA Portal Search Suite and how it improves the integration process.

 

The following is a list of questions and answers from the webinar series. If you have any other questions, please feel free to reach out to the Fishbowl team!

Q. What version of SharePoint does this product work with?

A. This product is not designed to work with SharePoint. Google has a SharePoint connector that indexes content from SharePoint and pulls it into the GSA, and then the GSA Portal Search Suite would allow any of that content to be served up in your portal.

Fishbowl also has a product called SharePoint Connector that connects SharePoint with Oracle WebCenter Content.

Q. Is Fishbowl a reseller of the GSA? Where can I get a GSA?

A. Yes, we sell the GSA, as well as add-on products and consulting services for the GSA. Visit our website for more information about our GSA services.

Q. What is the difficulty level of customizing the XSLT front end? How long would it take to roll out?

A. This will depend on what you’re trying to customize. If it’s just colors, headers, etc., you could do it pretty quickly because the difficulty level is fairly low. If you’re looking at doing a full-scale customization and entirely changing the look and feel, that could take a lot longer – I would say upwards of a month. The real challenge is that there isn’t a lot of documentation from Google on how to do it, so you would have to do a lot of experimentation.

One of the reasons we created this product is because most customers haven’t been able to fully customize their GSA with a portal, partly because Google didn’t design it to be customizable in this way.

Q. What versions of Liferay does this product support?

A. It supports version 6.2. If you have another version you’d like to integrate with, you can follow up with our team and we can discuss the possibility of working with other versions.

Q. Do you have a connector for IBM WCM?

A. Fishbowl does not have a connector, but Google has a number of connectors that can integrate with many different types of software.

Q. Are you talking about WebCenter Portal or WCM?

A. This connector is designed for WebCenter Portal. If you’re talking about WCM as in SiteStudio or WebCenter Content, we have done a number of projects with those programs. This particular product wouldn’t apply to those situations, but we have other connectors that would work with programs such as WebCenter Content.

Q. Where is the portlet deployed? Is it on the same managed node?

A. The portlets are deployed on the portlet server in WebCenter Portal.

Q. Where can we get the documentation for this product?

A. While the documentation is not publicly available, we do have a product page on the website that includes a lot of information on the Portal Search Suite. Contact your Fishbowl representative if you’d like to learn more about it.

Q. What are the server requirements?

A. WebCenter Portal 11g or Liferay 6.2 and Google Search Appliance 7.2.

Q. Does this product include the connector for indexing content?

A. No, this product does not include a connector. We do have a product called GSA Connector for WebCenter that indexes content and then allows you to integrate that content with a portal. Depending on how your portal is configured, you could also crawl the portal just like you would in a regular website. However, this product focuses exclusively on serving and not on indexing.

Q. How many portals will a GSA support? I have several WebCenter Content domains on the same server.

A. The GSA is licensed according to number of content items, not number of sources. You purchase a license for a certain number of content items and then it doesn’t matter how many domains the content is coming from.

The post The Benefits of Integrating a Google Search Appliance with an Oracle WebCenter or Liferay Portal appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

My Oracle Support Upgrade Complete

Joshua Solomin - Thu, 2014-10-23 09:34

We upgraded My Oracle Support on October 10, 2014. This upgrade brings changes to help you work more effectively with Oracle Support.

Among the areas in which you will notice enhancements are:

  • The My Oracle Support customer experience
  • My Oracle Support Chat
  • Knowledge Management
  • Cloud Portal
For details about the latest features visit the My Oracle Support User Resource Center.

 

 

OCP 12C – SQL Enhancements

DBA Scripts and Articles - Thu, 2014-10-23 08:20

Extended Character Data Type Columns: In this release Oracle changed the maximum size of three data types. In Oracle 12c, if you set a VARCHAR2 to 4000 bytes or less it is stored inline; if you set it to more than 4000 bytes it is transformed into an extended character data type and stored out [...]
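
As a hedged illustration of the switch the excerpt refers to (paraphrased from Oracle's documented procedure; verify the exact steps for your release and take a backup first), extended data types are enabled once, in UPGRADE mode:

sqlplus / as sysdba <<'EOF'
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER SYSTEM SET max_string_size=EXTENDED;
@?/rdbms/admin/utl32k.sql
SHUTDOWN IMMEDIATE;
STARTUP;
-- with EXTENDED in effect, a column such as this may hold up to 32767 bytes
CREATE TABLE t (c VARCHAR2(32767));
EOF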

The post OCP 12C – SQL Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 6

Pythian Group - Thu, 2014-10-23 07:35

Today’s blog post is part six of seven in a series dedicated to Deploying a Private Cloud at Home, where I will demonstrate how to configure the controller node with legacy networking and the OpenStack dashboard for a web GUI. Feel free to check out part five, where we configured the compute node with OpenStack services.

  1. First load the admin variables admin-openrc.sh
    source /root/admin-openrc.sh
  2. Enable legacy networking
    openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
    openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
  3. Restart the Compute services
    service openstack-nova-api restart
    service openstack-nova-scheduler restart
    service openstack-nova-conductor restart
  4. Create the IP pool which will be assigned to the instances we will launch later. My network is 192.168.1.0/24. I took a subpool of that range and am using it to assign IPs to the VMs. As the VMs will be on my shared network, I want the IPs in the same range as the other systems on the network.
    Here I am using the subnet of 192.168.1.16/28
  5. Create a network
    nova network-create vmnet --bridge br0 --multi-host T --fixed-range-v4 192.168.1.16/28
  6. Verify networking by listing the network (a quick boot test is sketched after this list)
    nova net-list
  7. Install the dashboard. The dashboard gives you a web UI to manage OpenStack instances and services. As we will be using the default configuration, I will not go into detail here.
    yum install -y mod_wsgi openstack-dashboard
  8. Update ALLOWED_HOSTS in local_settings to include the addresses from which you wish to access the dashboard. I am running these on my intranet, so I allowed every host in my network, but you can specify which hosts you want to give access to.
    ALLOWED_HOSTS = ['*']
  9. Start and enable Apache web server
    service httpd start
    chkconfig httpd on
  10. You can now access the dashboard at http://controller/dashboard
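
As a quick sanity check (a sketch only: the image and flavor names are placeholders for whatever your Glance and Nova setup provides), you can boot a test instance and confirm it picks up an address from the pool:

    nova boot --flavor m1.tiny --image cirros-0.3.3 testvm
    nova list    # the instance should come up with an IP from 192.168.1.16/28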

 

This completes the configuration of the OpenStack private cloud. The same guide can be used for a Rackspace private cloud, as it too is based on OpenStack Icehouse, but that is for another time.

Now that we have a working IaaS cloud, we can configure any SaaS on top of it, but that would require another series altogether.

Stay tuned for part seven, our final post in the series Deploying a Private Cloud at Home, where I will share scripts that automate the installation and configuration of the controller and compute nodes.

Categories: DBA Blogs

SQL Server failover cluster, VSphere, & SCSI-3 reservation nightmares

Yann Neuhaus - Wed, 2014-10-22 22:53

When I have to install a virtualized SQL Server FCI at a customer site as a SQL Server consultant, the virtualized environment is usually already in place. I guess this is the same for most database consultants. Since we therefore lack practice, I have to admit that we do not always know the right configuration settings to apply at the virtualization layer in order to correctly run a SQL Server FCI architecture.

A couple of days ago, I had an interesting case where I had to help a customer correctly configure the storage layer on vSphere 5.1. First, I would like to thank the customer, because I seldom have the opportunity to deal with VMware (other than via my personal lab).

The story begins with a failover test that failed randomly on a SQL Server FCI after switching the SQL Server volumes from VMFS to RDM in physical compatibility mode. We had to switch because the first configuration was installed in a CIB (Cluster-in-a-Box) configuration. As you certainly know, in this case it does not provide a full additional layer of high availability over VMware, because all the virtual machines are on the same host. So we decided to move to a CAB (Cluster-across-Boxes) configuration, which is more reliable than the first one.

In the new configuration, a failover randomly triggered a Windows error 170 with this brief description: "The resource is busy". At this point, I suspected that the SCSI-3 reservation was not performed correctly, but the cluster validation report didn’t show any errors concerning SCSI-3 reservations. I then decided to generate the cluster log to see if I could find more information about the problem - and here is what I found:

 

00000fb8.000006e4::2014/10/07-17:50:34.116 INFO [RES] Physical Disk : HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='?scsi#disk&ven_dgc&prod_vraid#5&effe51&0&000c00#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
00000fb8.000016a8::2014/10/07-17:50:34.116 INFO [RES] Physical Disk : ResHardDiskArbitrateInternal request Not a Space: Uses FastPath
00000fb8.000016a8::2014/10/07-17:50:34.116 INFO [RES] Physical Disk : ResHardDiskArbitrateInternal: Clusdisk driver handle or event handle is NULL.
00000fb8.000016a8::2014/10/07-17:50:34.116 INFO [RES] Physical Disk : HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='?scsi#disk&ven_dgc&prod_vraid#5&effe51&0&000d00#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
00000fb8.000006e4::2014/10/07-17:50:34.117 INFO [RES] Physical Disk : Arbitrate - Node using PR key a6c936d60001734d
00000fb8.000016a8::2014/10/07-17:50:34.118 INFO [RES] Physical Disk : Arbitrate - Node using PR key a6c936d60001734d
00000fb8.000006e4::2014/10/07-17:50:34.120 INFO [RES] Physical Disk : HardDiskpPRArbitrate: Fast Path arbitration...
00000fb8.000016a8::2014/10/07-17:50:34.121 INFO [RES] Physical Disk : HardDiskpPRArbitrate: Fast Path arbitration...
00000fb8.000016a8::2014/10/07-17:50:34.122 WARN [RES] Physical Disk : PR reserve failed, status 170
00000fb8.000006e4::2014/10/07-17:50:34.122 INFO [RES] Physical Disk : Successful reserve, key a6c936d60001734d
00000fb8.000016a8::2014/10/07-17:50:34.123 ERR   [RES] Physical Disk : HardDiskpPRArbitrate: Error exit, unregistering key...
00000fb8.000016a8::2014/10/07-17:50:34.123 ERR   [RES] Physical Disk : ResHardDiskArbitrateInternal: PR Arbitration for disk Error: 170.
00000fb8.000016a8::2014/10/07-17:50:34.123 ERR   [RES] Physical Disk : OnlineThread: Unable to arbitrate for the disk. Error: 170.
00000fb8.000016a8::2014/10/07-17:50:34.124 ERR   [RES] Physical Disk : OnlineThread: Error 170 bringing resource online.
00000fb8.000016a8::2014/10/07-17:50:34.124 ERR   [RHS] Online for resource AS_LOG failed.

 

An arbitration problem! It looks to be related to my first guess, doesn't it? Unfortunately I didn’t have access to the vmkernel.log to see potential reservation conflicts. After that, and this is certainly the funny part of this story (and probably the trigger for this article), I took a look at the multi-pathing configuration of each RDM disk. The reason is that I remembered some conversations with one of my friends (he will surely recognize himself ;-)) in which we talked about SCSI-3 reservation issues with VMware.

As a matter of fact, the path selection policy was configured to Round Robin here. According to VMware KB 1010041, PSP_RR is not supported with vSphere 5.1 for Windows failover clusters and shared disks. It is, however, the default configuration when creating RDM disks on the EMC VNX storage used by my customer. After changing this setting for each shared disk, no problem occurred!
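
For reference, this is roughly how the policy can be checked and changed from the ESXi shell (a sketch: the naa identifier is a placeholder for your actual LUN):

esxcli storage nmp device list -d naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx
esxcli storage nmp device set -d naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_FIXED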

My customer inquired about the difference between VMFS and RDM disks. I don’t presume to be a VMware expert, because I'm not, but I know that database administrators and consultants can no longer afford to ignore how VMware (or Hyper-V) works.

Fortunately, most of the time there will be virtualization administrators with strong skills, but sometimes not, and in that case you may feel alone when facing such a problem. So the brief answer I gave the customer was the following: if we used VMDKs or virtual-mode RDMs instead of physical-mode RDMs, the SCSI reservation would be translated into a file lock. In a CIB configuration this is not a problem, but it is for a CAB configuration, as you can imagine. Furthermore, using PSP_RR with versions older than vSphere 5.5 can release the reservation and cause issues like the one described in this article.

Wishing you a happy and problem-free virtualization!

Using the tc Server build pack for Pivotal Cloud Foundry 1.3

Pas Apicella - Wed, 2014-10-22 19:09
On the Pivotal Network you will find various buildpacks that you can download and apply to PCF for use by your applications, beyond the shipped buildpacks, using the link below.

https://network.pivotal.io/products/pivotal-cf

I am going to show how you would take one of these buildpacks, install it, and then consume it from an application. In this demo I am going to use "tc server buildpack (offline) v2.4".

1. Log in as an admin user and upload the buildpack as shown below. I am adding this buildpack in the last position, which is position 6.

[Tue Oct 21 20:36:01 papicella@:~/cf/buildpacks ] $ cf create-buildpack tc_server_buildpack_offline tc-server-buildpack-offline-v2.4.zip 6
Creating buildpack tc_server_buildpack_offline...
OK

Uploading buildpack tc_server_buildpack_offline...
OK

2. View buildpacks, which should show the one we just uploaded above.

[Thu Oct 23 11:15:18 papicella@:~/cf/APJ1 ] $ cf buildpacks
Getting buildpacks...

buildpack                     position   enabled   locked   filename
java_buildpack_offline        1          true      false    java-buildpack-offline-v2.4.zip
ruby_buildpack                2          true      false    ruby_buildpack-offline-v1.1.0.zip
nodejs_buildpack              3          true      false    nodejs_buildpack-offline-v1.0.1.zip
python_buildpack              4          true      false    python_buildpack-offline-v1.0.1.zip
go_buildpack                  4          true      false    go_buildpack-offline-v1.0.1.zip
php_buildpack                 5          true      false    php_buildpack-offline-v1.0.1.zip
tc_server_buildpack_offline   6          true      false    tc-server-buildpack-offline-v2.4.zip

3. Push the application using the buildpack uploaded above. Below is a simple manifest which refers to the buildpack I uploaded.

manifest.yml

applications:
- name: pcfhawq
  memory: 512M
  instances: 1
  host: pcfhawq
  domain: yyyy.fe.dddd.com
  path: ./pcfhawq.war
  buildpack: tc_server_buildpack_offline
  services:
   - phd-dev

[Thu Oct 23 11:36:26 papicella@:~/cf/buildpacks ] $ cf push -f manifest-apj1.yml
Using manifest file manifest-apj1.yml

Creating app pcfhawq-web in org pas-org / space apple as pas...
OK

Creating route yyyy.apj1.dddd.gopivotal.com...
OK

Binding pcfhawq-web.yyyy.fe.dddd.com to pcfhawq-web...
OK

Uploading pcfhawq-web...
Uploading app files from: pcfhawq.war
Uploading 644.1K, 181 files
OK
Binding service phd-dev to app pcfhawq-web in org pas-org / space apple as pas...
OK

Starting app pcfhawq-web in org pas-org / space apple as pas...
OK
-----> Downloaded app package (5.6M)
-----> Java Buildpack Version: v2.4 (offline) | https://github.com/pivotal-cf/tc-server-buildpack.git#396ad0a
-----> Downloading Open Jdk JRE 1.7.0_60 from http://download.run.pivotal.io/openjdk/lucid/x86_64/openjdk-1.7.0_60.tar.gz (found in cache)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.3s)
-----> Downloading Spring Auto Reconfiguration 1.4.0_RELEASE from http://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.4.0_RELEASE.jar (found in cache)
       Modifying /WEB-INF/web.xml for Auto Reconfiguration
-----> Downloading Tc Server Instance 2.9.6_RELEASE from http://download.run.pivotal.io/tc-server/tc-server-2.9.6_RELEASE.tar.gz (found in cache)
       Instantiating tc Server in .java-buildpack/tc_server (3.4s)
-----> Downloading Tc Server Lifecycle Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-lifecycle-support/tomcat-lifecycle-support-2.2.0_RELEASE.jar (found in cache)
-----> Downloading Tc Server Logging Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-logging-support/tomcat-logging-support-2.2.0_RELEASE.jar (found in cache)
-----> Downloading Tc Server Access Logging Support 2.2.0_RELEASE from http://download.run.pivotal.io/tomcat-access-logging-support/tomcat-access-logging-support-2.2.0_RELEASE.jar (found in cache)
-----> Uploading droplet (45M)

1 of 1 instances running

App started

Showing health and status for app pcfhawq-web in org pas-org / space apple as pas...
OK

requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: pcfhawq-web.yyyy.fe.dddd.com

     state     since                    cpu    memory         disk
#0   running   2014-10-23 11:37:56 AM   0.0%   398.6M of 1G   109.2M of 1G

4. Verify within the DEV console that the application is using the buildpack you targeted.


More Information

Buildpacks
http://docs.pivotal.io/pivotalcf/buildpacks/index.html
Categories: Fusion Middleware

Keynote: The Year After the Year of the MOOC

Michael Feldstein - Wed, 2014-10-22 15:23

Here’s a talk I gave recently at the CIT event at Duke. In addition to being very nice and gracious, the Duke folks impressed me with how many faculty they have who are not only committed to improving their teaching practices but also dedicating significant time to it as a core professional activity.

Anyway, for what it’s worth, here’s the talk:

Click here to view the embedded video.

The post Keynote: The Year After the Year of the MOOC appeared first on e-Literate.

Reminder: first Arizona Oracle User Group meeting tomorrow

Bobby Durrett's DBA Blog - Wed, 2014-10-22 14:02

Fellow Phoenicians (citizens of the Phoenix, Arizona area):

This is a reminder that tomorrow is the first meeting of the newly reborn (risen from the ashes) Arizona Oracle User Group.  Here are the meeting details: url

I hope to meet some of my fellow Phoenix area DBAs tomorrow afternoon.

– Bobby

 


Categories: DBA Blogs

OCP 12C – DataPump, SQL*Loader, External Tables Enhancements

DBA Scripts and Articles - Wed, 2014-10-22 13:57

Oracle DataPump Enhancements - Full Transportable Export and Import of Data: In Oracle 12c you now have the possibility to create full transportable exports and imports. A full transportable export contains all objects and data needed to create a copy of the database. To create a full transportable export of your database you need to specify [...]
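
As a hedged sketch of the syntax (the directory object and file names are placeholders, and the source tablespaces must meet the documented prerequisites), a full transportable export combines FULL=Y with TRANSPORTABLE=ALWAYS:

expdp system/password full=y transportable=always version=12 \
      directory=DATA_PUMP_DIR dumpfile=full_tts.dmp logfile=full_tts.log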

The post OCP 12C – DataPump, SQL*Loader, External Tables Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Deadlocks

Jonathan Lewis - Wed, 2014-10-22 11:34

A recent question on the OTN forum asked about narrowing down the cause of deadlocks, and this prompted me to set up a little example. Here’s a deadlock graph of a not-quite-standard type:


Deadlock graph:
                                          ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name                             process session holds waits  process session holds waits
TX-00040001-000008EC-00000000-00000000         50     249     X             48       9           X
TX-000A001F-000008BC-00000000-00000000         48       9     X             50     249           S


My session (the one that dumped the trace file) is 249, and I was blocked by session 9. The slight anomaly, of course, is that I was waiting on a TX lock in mode 4 (Share) rather than the more common mode 6 (eXclusive).

There are plenty of notes on the web these days to tell you that this wait relates in some way to a unique index (or some associated referential integrity) or an ITL wait. (Inevitably there are a couple of other less frequently occurring and less well documented reasons, such as waits for tablespaces to change state, but I’m going to ignore those for now.) The question is, how do I tell whether this example is related to uniqueness (indexing) or to ITLs? For recent versions of Oracle the answer is in the rest of the trace file, which now holds the recent wait history for the session that dumped the trace file.

Reading down my trace file, past the line which says “Information for THIS session”, I eventually get to this:


    Current Wait Stack:
     0: waiting for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0xa001f, sequence=0x8bc
        wait_id=80 seq_num=81 snap_id=1
 

So it didn’t take me long to find out I had an ITL problem (which should be a pretty rare occurrence in newer versions of Oracle); but there’s more:

...

    There is at least one session blocking this session.
      Dumping 1 direct blocker(s):
        inst: 1, sid: 9, ser: 40192
      Dumping final blocker:
        inst: 1, sid: 9, ser: 40192
    There are 2 sessions blocked by this session.
    Dumping one waiter:
      inst: 1, sid: 357, ser: 7531
      wait event: 'enq: TX - allocate ITL entry'

...

    Session Wait History:
        elapsed time of 0.000035 sec since current wait
     0: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0x5000c, sequence=0xa39
        wait_id=79 seq_num=80 snap_id=1
        wait times: snap=5.002987 sec, exc=5.002987 sec, total=5.002987 sec
        wait times: max=5.000000 sec
        wait counts: calls=2 os=2
        occurred after 0.000047 sec of elapsed time
     1: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0xa001f, sequence=0x8bc
        wait_id=78 seq_num=79 snap_id=1
        wait times: snap=1 min 4 sec, exc=1 min 4 sec, total=1 min 4 sec
        wait times: max=1 min 4 sec
        wait counts: calls=22 os=22
        occurred after 0.000032 sec of elapsed time

...
     8: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0x5000c, sequence=0xa39
        wait_id=71 seq_num=72 snap_id=1
        wait times: snap=5.001902 sec, exc=5.001902 sec, total=5.001902 sec
        wait times: max=5.000000 sec
        wait counts: calls=2 os=2
        occurred after 0.000042 sec of elapsed time
     9: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0xa001f, sequence=0x8bc
        wait_id=70 seq_num=71 snap_id=1
        wait times: snap=4.005342 sec, exc=4.005342 sec, total=4.005342 sec
        wait times: max=4.000000 sec
        wait counts: calls=2 os=2
        occurred after 0.000031 sec of elapsed time

...

    Sampled Session History of session 249 serial 3931
    ---------------------------------------------------

    The history is displayed in reverse chronological order.

    sample interval: 1 sec, max history 120 sec
    ---------------------------------------------------
      [9 samples,                                          11:14:50 - 11:14:58]
        waited for 'enq: TX - allocate ITL entry', seq_num: 81
          p1: 'name|mode'=0x54580004
          p2: 'usn<<16 | slot'=0xa001f
          p3: 'sequence'=0x8bc
          time_waited: >= 8 sec (still in wait)
      [5 samples,                                          11:14:45 - 11:14:49]
        waited for 'enq: TX - allocate ITL entry', seq_num: 80
          p1: 'name|mode'=0x54580004
          p2: 'usn<<16 | slot'=0x5000c
          p3: 'sequence'=0xa39
          time_waited: 5.002987 sec (sample interval: 4 sec)
...

The little report that follows the initial wait state shows that the situation was a little messy – session 9 was my first and last blocker, but there was another session tangled up in the chain of waits, session 357.

Following this there’s a set of entries from my v$session_wait_history - and if you look carefully at the slot and sequence that appear on the second line of each wait, you’ll notice that my waits have been alternating between TWO other sessions/transactions before I finally crashed.

Finally there’s a set of entries for my session extracted from v$active_session_history. (Question: I’m only allowed to query v$active_session_history if I’ve licensed the Diagnostic Pack - so should I shut my eyes when I get to this part of the trace file? ;) This breakdown also shows my session alternating between waits on the two different blockers, giving me a pretty good post-event breakdown of what was going on around the time of the deadlock.
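
If you want to see these waits for yourself, one classic way to provoke them (sketched here with invented object names, so adapt freely) is a table created with INITRANS 1 and PCTFREE 0, which leaves no free space in the block for extra ITL slots:

sqlplus user/pass <<'EOF'
create table itl_demo (id number, padding varchar2(200)) initrans 1 pctfree 0;
insert into itl_demo
select rownum, rpad('x',200,'x') from dual connect by level <= 30;
commit;
-- now update different rows of the same block from three separate sessions,
-- leaving the transactions uncommitted; the third session should start
-- waiting on 'enq: TX - allocate ITL entry'.
EOF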


MySQL: Troubleshooting an Instance for Beginners

Pythian Group - Wed, 2014-10-22 09:17

So as you may know, my new position involves the MySQL world, so I’m in the process of picking up the language and whereabouts of this DBMS. My teammate Alkin Tezuysal (@ask_dba on Twitter) has a very cool break-and-fix lab which you should check out if you are going to Percona Live London 2014 - he will be running this lab, so be sure not to miss it.

So the first thing I tried was to bring up the service but, to my surprise, the MySQL user didn’t exist, so I created it.

Note: Whenever you see “…”, it is to shorten the output.

[user-lab@ip-10-10-10-1 ~]$ service mysqld start
touch: cannot touch ‘/var/log/mysqld.log’: Permission denied
chown: invalid user: ‘mysql:mysql’
chmod: changing permissions of ‘/var/log/mysqld.log’: Operation not permitted
mkdir: cannot create directory ‘/var/lib/msql’: Permission denied
[user-lab@ip-10-10-10-1 ~]$ id mysql
id: mysql: no such user
[user-lab@ip-10-10-10-1 ~]$ sudo useradd mysql

So now that the user exists, I try to bring the service up, and we are back at square one, as a configuration variable in the .cnf file is misspelled. But there is another problem: there is more than one .cnf file.

[user-lab@ip-10-10-10-1 ~]$ sudo su -
Last login: Thu Jul 31 11:37:21 UTC 2014 on pts/0
Last failed login: Tue Oct 14 05:45:47 UTC 2014 from 60.172.228.40 on ssh:notty
There were 1269 failed login attempts since the last successful login.
[root@ip-10-10-10-1 ~]# service mysqld start
Initializing MySQL database: Installing MySQL system tables...
141014 17:05:46 [ERROR] /usr/libexec/mysqld: unknown variable 'tmpd1r=/var/tmp'
141014 17:05:46 [ERROR] Aborting

141014 17:05:46 [Note] /usr/libexec/mysqld: Shutdown complete

Installation of system tables failed! Examine the logs in
/var/lib/msql for more information.

...

 [FAILED]

In the Oracle world, this is easier to troubleshoot. Here in the MySQL world, the best way to see which .cnf files are being read is with an strace command.

[root@ip-10-10-10-1 ~]# strace -e trace=open,stat /usr/libexec/mysqld
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libaio.so.1", O_RDONLY|O_CLOEXEC) = 3
...
stat("/etc/my.cnf", {st_mode=S_IFREG|0644, st_size=255, ...}) = 0
open("/etc/my.cnf", O_RDONLY)           = 3
stat("/etc/mysql/my.cnf", 0x7fffe4b38120) = -1 ENOENT (No such file or directory)
stat("/usr/etc/my.cnf", {st_mode=S_IFREG|0644, st_size=25, ...}) = 0
open("/usr/etc/my.cnf", O_RDONLY)       = 3
stat("/root/.my.cnf", {st_mode=S_IFREG|0644, st_size=33, ...}) = 0
open("/root/.my.cnf", O_RDONLY)         = 3
...
141014 17:12:05 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!

So now I can see that /usr/etc/my.cnf is the one with the misspelled variable, so we correct it.

[root@ip-10-10-10-1 ~]# cat /usr/etc/my.cnf
[mysqld]
tmpd1r=/var/tmp
[root@ip-10-10-10-1 ~]# sed -i -e 's/tmpd1r/tmpdir/' /usr/etc/my.cnf
[root@ip-10-10-10-1 ~]# cat /usr/etc/my.cnf
[mysqld]
tmpdir=/var/tmp

Another try, but again the same result - even worse this time, as there is no output at all. After digging around, I found that the place to look is /var/log/mysqld.log, and the problem was that some files belonged to the root user instead of the MySQL user.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
[root@ip-10-10-10-1 ~]# cat /var/log/mysqld.log
141014 17:16:33 mysqld_safe Starting mysqld daemon with databases from /var/lib/msql
141014 17:16:33 [Note] Plugin 'FEDERATED' is disabled.
/usr/libexec/mysqld: Table 'mysql.plugin' doesn't exist
141014 17:16:33 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
141014 17:16:33 InnoDB: The InnoDB memory heap is disabled
141014 17:16:33 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141014 17:16:33 InnoDB: Compressed tables use zlib 1.2.7
141014 17:16:33 InnoDB: Using Linux native AIO
/usr/libexec/mysqld: Can't create/write to file '/var/tmp/ib1rikjr' (Errcode: 13)
141014 17:16:33  InnoDB: Error: unable to create temporary file; errno: 13
141014 17:16:33 [ERROR] Plugin 'InnoDB' init function returned error.
141014 17:16:33 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
141014 17:16:33 [ERROR] Unknown/unsupported storage engine: InnoDB
141014 17:16:33 [ERROR] Aborting

141014 17:16:33 [Note] /usr/libexec/mysqld: Shutdown complete
[root@ip-10-10-10-1 ~]# perror 13
Error code 13: Permission denied
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql/mysql/plugin.*
-rw-rw---- 1 root root 8586 Mar 13  2014 /var/lib/mysql/mysql/plugin.frm
-rw-rw---- 1 root root    0 Mar 13  2014 /var/lib/mysql/mysql/plugin.MYD
-rw-rw---- 1 root root 1024 Mar 13  2014 /var/lib/mysql/mysql/plugin.MYI
[root@ip-10-10-10-1 ~]# chown -R mysql:mysql /var/lib/mysql/mysql/

So I think: yay, I’m set and it will come up! I give it one more shot and, you guessed it, same result and a different error :( This time around, the problem seemed to be that the configured buffer pool memory exceeds what the machine has, so we change it.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
141014 17:36:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
141014 17:36:15 [Note] Plugin 'FEDERATED' is disabled.
141014 17:36:15 InnoDB: The InnoDB memory heap is disabled
141014 17:36:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141014 17:36:15 InnoDB: Compressed tables use zlib 1.2.7
141014 17:36:15 InnoDB: Using Linux native AIO
141014 17:36:15 InnoDB: Initializing buffer pool, size = 100.0G
InnoDB: mmap(109890764800 bytes) failed; errno 12
141014 17:36:15 InnoDB: Completed initialization of buffer pool
141014 17:36:15 InnoDB: Fatal error: cannot allocate memory for the buffer pool
141014 17:36:15 [ERROR] Plugin 'InnoDB' init function returned error.
141014 17:36:15 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
141014 17:36:15 [ERROR] Unknown/unsupported storage engine: InnoDB
141014 17:36:15 [ERROR] Aborting
[root@ip-10-10-10-1 ~]# grep 100 /etc/my.cnf
innodb_buffer_pool_size=100G
[root@ip-10-10-10-1 ~]# sed -i -e 's/100G/256M/' /etc/my.cnf
[root@ip-10-10-10-1 ~]# grep innodb_buffer_pool_size /etc/my.cnf
innodb_buffer_pool_size=256M

Now, I’m not even expecting this instance to come up, and I am correct — It seems a filename has incorrect permissions.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
root@ip-10-10-10-1 ~]# cat /var/log/mysqld.log
...
141014 17:37:15 InnoDB: Initializing buffer pool, size = 256.0M
141014 17:37:15 InnoDB: Completed initialization of buffer pool
141014 17:37:15  InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name ./ibdata1
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
141014 17:37:15 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql/ibdata1
-rw-rw---- 1 27 27 18874368 Mar 13  2014 /var/lib/mysql/ibdata1
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql
total 83980
-rw-rw---- 1    27    27 18874368 Mar 13  2014 ibdata1
-rw-rw---- 1    27    27 33554432 Mar 13  2014 ib_logfile0
-rw-rw---- 1    27    27 33554432 Mar 13  2014 ib_logfile1
drwx------ 2 mysql mysql     4096 Mar 13  2014 mysql
drwx------ 2 root  root      4096 Mar 13  2014 performance_schema
drwx------ 2 root  root      4096 Mar 13  2014 test
[root@ip-10-10-10-1 ~]# chown -R mysql:mysql /var/lib/mysql

Now, I wasn’t even expecting the service to come up, but to my surprise it came up!

[root@ip-10-10-10-1 ~]# service mysqld start
Starting mysqld:                                           [  OK  ]

So now what I wanted to do was connect and start working, but again there was another error! I saw that it was related to the socket file mysql.sock, so I changed it to the correct value in the .cnf file.

[root@ip-10-10-10-1 ~]# mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
[root@ip-10-10-10-1 ~]# service mysql status
mysql: unrecognized service
[root@ip-10-10-10-1 ~]# service mysqld status
mysqld (pid  5666) is running...
[root@ip-10-10-10-1 ~]# ls -l /tmp/mysql.sock
ls: cannot access /tmp/mysql.sock: No such file or directory
[root@ip-10-10-10-1 ~]# grep socket /var/log/mysqld.log | tail -n 1
Version: '5.5.36'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)
[root@ip-10-10-10-1 ~]# lsof -n | grep mysqld | grep unix
mysqld    5666    mysql   12u     unix 0xffff880066fbea40       0t0     981919 /var/lib/mysql/mysql.sock
[root@ip-10-10-10-1 ~]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql

innodb_data_file_path=ibdata1:18M
innodb_buffer_pool_size=256M
innodb_log_file_size=32M
sort_buffer_size=60M

[client]
socket=/tmp/mysql.sock
[root@ip-10-10-10-1 ~]# vi /etc/my.cnf
[root@ip-10-10-10-1 ~]# service mysqld restart
Stopping mysqld:                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
[root@ip-10-10-10-1 ~]# mysql -p
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Conclusion

As you can see, there are different ways to troubleshoot the startup of a MySQL instance. I hope this helps you out in your journey when you are starting to use this DBMS, and if you know of another way, let me know in the comment section below.

Please note that this blog post was originally published on my personal blog.

Categories: DBA Blogs

OCP 12C – Partitioning Enhancements

DBA Scripts and Articles - Wed, 2014-10-22 09:13

Online Partition Operations: Table partitions and subpartitions can now be moved online. Compression options can also be added during an online partition move. Reference Partitioning Enhancements - Truncate or Exchange Partition with Cascade option: with Oracle 12c, it is now possible to use the CASCADE option to cascade operations to a child-referenced table when [...]
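
As a hedged sketch of the first of these enhancements (the table, partition and tablespace names are invented), an online move with compression looks like this:

sqlplus user/pass <<'EOF'
alter table sales move partition sales_q1_2014
  tablespace new_ts compress for oltp
  update indexes online;
EOF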

The post OCP 12C – Partitioning Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs