When outsourcing your operating system support, you want to know that you have expert professionals with knowledge of all your platforms handling your data. At RDX, that’s something you don’t have to worry about.
Welcome back to our Additional Services series!
Whether you use Windows, UNIX or Linux systems, we remotely support anything and everything an admin does onsite. Our Windows OS tech support includes hardware selection, monitoring and tuning, among many other services. We assume total ownership of everything: your server’s security, performance, availability and improvement, and we understand the mutually dependent OS/DB relationship that affects all these things. The same goes for UNIX and Linux.
Financially, you pay a single bill for both database and OS support services, and you only pay for the services you need, when you need them.
For more details on our extensive operating system support services, follow the link below. We’ll see you next time!
The post Remote Support for Windows/UNIX/LINUX: Additional Services Series Pt. 5 [VIDEO] appeared first on Remote DBA Experts.
Part One outlined the histories and basic foundations of PostgreSQL and MySQL, respectively.
In Part Two, we'll focus on the benefits of using both of these structures and how remote DBA professionals use them to perform mission-critical functions for enterprises.
What is a relational database management system?
Before going into further detail on PostgreSQL and MySQL, it's important to define what an RDBMS is, as both of these systems subscribe to this model. According to DigitalOcean, an RDBMS stores information by identifying related pieces of data to form comprehensive sets, or schemas. The resulting tables are easily queried by data analysts, applications and other entities because their columns define attributes while their rows hold the individual records.
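As a small illustrative sketch of that model (the table and column names below are hypothetical, chosen only for the example), columns define the attributes, each row holds one record, and related sets are joined back together at query time:

```sql
-- Hypothetical schema: columns define attributes, rows hold records.
CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);

CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES customers (customer_id),
    total       DECIMAL(10, 2)
);

-- Related pieces of data are recombined through a join.
SELECT c.name, o.total
FROM customers AS c
JOIN orders AS o ON o.customer_id = c.customer_id;
```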
MySQL: Support, advantages and drawbacks
As Carla Schroder of OpenLogic noted, MySQL is a solid choice for IT professionals working with Web architectures. It's capable of organizing unstructured information, such as the kind of data found on Twitter, Facebook and Wikipedia (all of which are powered by MySQL). DigitalOcean asserted the platform possesses sound security functions for controlling data access, and its routine tasks are easy to perform.
As for the disadvantages, the latter source acknowledged MySQL handles read-heavy tasks very well but falls somewhat short when it comes to read-write workloads. In addition, the platform lacks a full-text search component.
PostgreSQL: Support, advantages and drawbacks
DigitalOcean maintained PostgreSQL can handle a large variety of responsibilities
quite efficiently due to its high programmability and ACID compliance. Users can implement custom procedures, a few of which can be developed to simplify intricate, common database administration operations. Because it is object-relational, it can support nesting and other powerful features. Complex, customized tasks can be easily implemented and deployed.
What are its shortcomings? For one thing, it can be difficult to find hosting services for PostgreSQL because of the sheer number of variations. Also, for simple read-heavy operations it can be "overkill," as DigitalOcean described it.
Welcome back to our Additional Services series. Today we’re highlighting our Oracle EBS Techno Functional Support, a service we offer to help customers make sure their Oracle applications are running properly.
At RDX we offer full Oracle EBS support from
a team of experts, ensuring your mission-critical environments are available 24×7. Our team helps you customize your applications to meet business needs, and even provides advice about the best features to use so you can take advantage of advanced functionality. When problems do occur, RDX assigns experts to work Severity 1 issues around the clock.
Our dedicated EBS experts have cross-functional experience and adhere to industry best practices. We’ll also assign project managers to ensure we are on time and on budget with projects.
For more information on the full breadth of our Oracle EBS techno functional support, follow the link below! We’ll see you next time.
The post Oracle EBS Techno Functional Support: Additional Services Series Pt. 4 [VIDEO] appeared first on Remote DBA Experts.
PostgreSQL and MySQL are both recognized as the world's most popular open source database architectures, but there are some key differences between the two.
Database administration professionals often favor both environments for their raw, customizable formats. For those who are unfamiliar with the term, open source means the code used to create these architectures is divulged to the public, allowing IT experts of every ilk to reconstruct the program to fit specific needs. While MySQL and PostgreSQL are similar in this respect, there are some key differences.
A quick history: PostgreSQL
Carla Schroder, a contributor to OpenLogic, acknowledged PostgreSQL as the older solution, having been developed at the University of California, Berkeley in 1985. Thousands of enthusiasts from around the world have participated in the development and support of this architecture. DigitalOcean labeled the solution an object-relational database management system capable of handling mission-critical applications and high-frequency transactions. Here are some other notable traits:
- Fully compliant with atomicity, consistency, isolation and durability (ACID)
- Uses Kerberos and OpenSSL for robust protection features
- Point-in-time recovery enables users to implement warm standby servers for quick failover
A quick history: MySQL
As for MySQL, Schroder noted this particular system is about nine years younger than its predecessor
– having been created by MySQL AB in 1994. It provides a solid foundation for Web developers, as it's part of the software bundle comprising Linux, Apache HTTP Server, MySQL and PHP. MySQL was first blueprinted to be a reliable Web server backend because it used a speedy indexed sequential access method (ISAM). Over the years, experts have revised MySQL to support a variety of other storage engines, such as the MEMORY engine that provides temporary tables.
Although MySQL is open source, it isn't community-driven, so some versions (all of which are now owned and distributed by Oracle) cost a small amount of capital.
Part Two will dig deeper into these two architectures, describing use cases, their respective capabilities and more.
The three V's of big data – volume, variety and velocity – are causing vexing issues for database security teams.
Administrators are responsible for carrying out a number of essential tasks, such as:
- Improving the performance of Hadoop, MySQL, NoSQL and other complex architectures
- Allowing data analysts to access these environments to scrutinize information
- Assigning access permission and privileges
- Monitoring interactions and instances in real time
The era of big data requires professionals to carry out these responsibilities proactively and without exposing data to risk. Monitoring is arguably the most essential task DBAs need to perform.
eSecurity Planet contributor Jeff Goldman noted many security breaches are caused by a lack of personnel availability. Employing tools that actively seek inconsistencies and infiltration attempts is a strong approach, but if a company doesn't have the professionals on deck to assess such issues, information could be lost or stolen.
Goldman noted Target's experts received numerous warnings before the headline-making breach occurred, but they didn't have the resources or time required to mitigate any problems. The amount of data they were required to manage hindered them from acknowledging a critical issue.
It's this kind of situation that prompts businesses to outsource to companies that provide database active monitoring. As opposed to leaving a team of two dozen administrators to struggle with information management, enterprises can benefit from the resources of an entire organization.
Remote database support groups dedicate not only software, but entire teams of professionals that proactively search for problems residing in complex architectures. Such experts have the resources to take the following measures:
- Program and develop customized surveillance tools designed to cater to specific customer needs
- Consult a comprehensive library of complex problems and solutions that can be applied to resolve them
- Implement surveillance programs that abide by standards defined by HIPAA, the Sarbanes-Oxley Act and Payment Card Industry regulations
- Network internally to resolve intricate issues as a team
Collecting external data
The Guardian referenced a study by PricewaterhouseCoopers, which surveyed 10,000 employees to figure out how businesses were collecting social data. The research discovered younger people were more than happy to share such information, meaning enterprises are going to be aggregating more unstructured information, such as:
- Comments on Facebook
- Photo shares on Instagram
- Videos through YouTube
Enterprises will likely want to process the intelligence in real-time, putting pressure on in-house
teams to secure communications and effectively organize data in environments such as Hadoop and NoSQL.
If businesses want to benefit from scrutinizing social information, they need the attention outsourced DBAs can provide. For more information on database active monitoring, read this white paper.
Natural disasters, cybercriminals and costly internal mistakes can cause companies to lose critical information. If the appropriate business continuity strategies aren't employed, organizations could lose customers or face government penalties.
These concerns have motivated database administrators to use programs with strong recovery tools, such as replication, mirroring and failover clustering.
SQL Server Pro contributor and DH2i Chief Technology Officer Thanh Ngo outlined a number of SQL Server functions that improve DR and ensure applications are highly available. Some of the most useful features he named are detailed below.
AlwaysOn Availability Groups
This particular feature allows a set of user databases to fail over as a cohesive unit. It forms a primary replica of the databases, which is set to be available for read-write tasks. From there, one to eight secondary replicas are created; these are read-only, but must be configured to allow read access.
Availability Groups builds on the concept of database mirroring, which enhances a database's security and accessibility. The feature essentially copies transactions from principal servers to their mirrored counterparts. Ngo outlined how mirroring functions in one of the following modes:
- High safety mode with automatic failover: Transactions carried out by two or more partners are synchronized while a "witness partner" orchestrates automated failover.
- High safety mode without automatic failover: The same operation detailed above is executed, but without the presence of a witness partner.
- High performance mode: A primary database commits a transaction without waiting for the mirrored counterpart to write the log to disk.
Ngo also acknowledged replication, which consists of a primary server (also known as the publisher) allocating data to one or more secondary databases (known as subscribers). Replication can be executed in one of three ways:
- Transactional allows for real-time data availability because it enables the publisher to distribute information to subscribers immediately, or at regular intervals.
- Snapshot copies and provisions data in the primary server and sends the cloned data to the secondary database once it's created.
- Merge enables bi-directional replication, meaning all changes made in both the subscriber and publisher databases are synchronized automatically.
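As a rough illustration of the Availability Groups feature described above, the T-SQL below sketches a two-replica group. The server, endpoint and database names are hypothetical, and the Windows failover cluster and HADR prerequisites are assumed to already be in place; this is a sketch, not a complete deployment:

```sql
-- Hypothetical two-node availability group with automatic failover.
-- Assumes SQLNODE1/SQLNODE2 are clustered and HADR is enabled.
CREATE AVAILABILITY GROUP [SalesAG]
FOR DATABASE [SalesDB]
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE1.corp.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC
    ),
    N'SQLNODE2' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE2.corp.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC,
        -- Secondaries are read-only, and only if configured for read access:
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)
    );
```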
A contemporary need
With these techniques in mind, if a company chooses to outsource its database administration needs, it should detail which government standards it needs to abide by. From there, DBAs can carry out a thorough risk assessment of how much customer data is vulnerable – a task MSPmentor contributor CJ Arlotta asserted is critical.
Employing SQL Server's replication and Availability Groups features helps ensure all data is retained even if a database breach occurs.
Deadlocking in SQL Server can be one of the more time-consuming issues to resolve. The script below can reduce the time it takes to gather the necessary information and troubleshoot the cause of the deadlocks. Using this script requires SQL Server 2005 or newer, with Trace Flag 1222 enabled so that deadlocking information is captured in the error log.
The first portion of the script collects the data written to the error log and parses it for the information needed. With this data, the script can return many different data points for identifying the root cause of your deadlocks. It begins with a query to return the number of deadlocks in the current error log.
select distinct top 1 deadlockcount from @results order by deadlockcount desc
The next script will allow you to review all of the deadlock information in the current error log. It will output the raw InputBuffer details, but if the queries running in your environment have extraneous tabs or spaces, you can modify the commented portion to remove them.
select deadlockcount, logdate, processinfo, logtext
    --,rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '))) as logtext_cleaned
from @results
order by id
An important piece of information when identifying and resolving deadlocks is the resource locks. This next query returns all of the error log records containing details for the locks associated with deadlocks. In some situations, the object and/or index name may not be included in this output.
select distinct logtext
from @results
where logtext like '%associatedobjectid%'
In order to find the objects involved in the deadlock occurrences, run the next query with its results output to text. Then, copy the output into a new query window and remove the ‘union’ from the end. When run, it will return the object and index names.
select distinct
    'SELECT OBJECT_NAME(i.object_id) as objectname, i.name as indexname FROM sys.partitions AS p INNER JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id WHERE p.partition_id = '
    + convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext)))))
    + ' union '
from @results
where logtext like ' keylock hobtid=%'
union
select distinct
    'SELECT OBJECT_NAME(i.object_id) as objectname, i.name as indexname FROM sys.partitions AS p INNER JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id WHERE p.partition_id = '
    + convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext)))))
    + ' union '
from @results
where logtext like ' pagelock fileid=%'
In my experience, situations can arise where there are a large number of deadlocks but only a few queries involved. This portion of the script will return the distinct queries participating in the deadlocks. The commented lines can be modified to remove extra tabs and spaces. To avoid issues caused by the InputBuffer data being on multiple lines, you should cross-reference these results with the results of the next query.
select max(deadlockcount) as deadlockcount, max(id) as id, logtext
    --rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '))) as logtext_cleaned
from @results
where logtext not in (
        'deadlock-list', ' process-list', ' inputbuf', ' executionStack',
        ' resource-list', ' owner-list', ' waiter-list'
    )
    and logtext not like ' owner id=%'
    and logtext not like ' waiter id=%'
    and logtext not like ' keylock hobtid=%'
    and logtext not like ' pagelock fileid%'
    and logtext not like ' deadlock victim=%'
    and logtext not like ' process id=%'
    and logtext not like ' frame procname%'
group by logtext
    --rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' ')))
order by id asc, deadlockcount asc
This query will return the execution stack and InputBuffer details for each deadlock.
select deadlockcount, logdate, processinfo, logtext
    --rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '))) as logtext_cleaned
from @executionstack
WHERE logtext not like '%process id=%'
    and logtext not like '%executionstack%'
order by id asc
For documentation purposes, this query will return the distinct InputBuffer output for the deadlock victims. If the InputBuffer data is on multiple lines, you should cross-reference these results with the results of the next query.
select max(d.deadlockcount) as deadlockcount, max(d.executioncount) executioncount, max(d.id) as id, logtext
    --rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(d.logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '))) as logtext_cleaned
from @executionstack d
right join (
    select e.executioncount
    from @results r
    join (
        select deadlockcount, logtext,
            convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext))))) victim
        from @results
        where logtext like ' deadlock victim=%'
    ) v on r.deadlockcount=v.deadlockcount
    left join (
        select id, logtext,
            substring(logtext, charindex('=', logtext)+1,50) processidstart,
            substring(substring(logtext, charindex('=', logtext)+1,50),0, charindex(' ', substring(logtext, charindex('=', logtext)+1,50))) processid
        from @results
        where logtext like ' process id=%'
    ) p on r.id=p.id
    join @executionstack e on r.id=e.id
    where v.victim=p.processid
) q on d.executioncount=q.executioncount
where d.logtext not like ' process id=%'
    and d.logtext <> ' executionStack'
    and d.logtext not like ' frame%'
group by logtext
    --rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' ')))
order by id asc, deadlockcount asc, executioncount asc
This query will return the execution stack and InputBuffer details for each victim.
select d.deadlockcount, d.logdate, d.processinfo, logtext
    --rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(d.logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '))) as logtext_cleaned
from @executionstack d
right join (
    select e.executioncount
    from @results r
    join (
        select deadlockcount, logtext,
            convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext))))) victim
        from @results
        where logtext like ' deadlock victim=%'
    ) v on r.deadlockcount=v.deadlockcount
    left join (
        select id, logtext,
            substring(logtext, charindex('=', logtext)+1,50) processidstart,
            substring(substring(logtext, charindex('=', logtext)+1,50),0, charindex(' ', substring(logtext, charindex('=', logtext)+1,50))) processid
        from @results
        where logtext like ' process id=%'
    ) p on r.id=p.id
    join @executionstack e on r.id=e.id
    where v.victim=p.processid
    --order by r.id
) q on d.executioncount=q.executioncount
where d.logtext not like ' process id=%'
    and d.logtext <> ' executionStack'
order by d.id asc
The script, which can be downloaded here, includes all of these queries for you to use. Each one is independent, so if you are only interested in the results for a single query, the other sections can be commented out.
Any feedback you have is always appreciated. In my opinion, that is one
of the best parts about writing T-SQL! Don’t forget to check back for my next post in which I will be using the AdventureWorks2008R2 database to provide an in-depth deadlock analysis.
The post Identifying Deadlocks Using the SQL Server Error Log appeared first on Remote DBA Experts.
To further reduce storage costs, organizations are storing their information in public cloud databases.
Consistent development in cloud technology has made accessing data across a network easier than computer scientists of 20 years ago could have ever predicted. Due to this popularity, database administration services have learned how to issue SQL Server queries across Microsoft Azure and other cloud environments.
Big data, services models evolving
TechTarget contributor John Moore noted that Database-as-a-Service (DBaaS) is becoming less about just providing storage and more about managing, optimizing and conducting performance diagnostics. Simply funneling data into a remote platform often causes disorganization – making it more difficult to find pertinent information and analyze it.
Moore referenced a statistic produced by MarketsandMarkets, which predicts the cloud database and DBaaS market will grow at a compound annual growth rate of 67.3 percent over the next five years, reaching $14.05 billion by 2019. Outsourcing maintenance and support for cloud data stores reduces overhead and ensures database security remains intact.
What knowledge is needed?
In regard to hiring a separate company to manage cloud servers, it's important to acknowledge the types of information organizations are aiming to learn from. Most of the data is unstructured, and is typically stored in Hadoop environments and NoSQL databases.
Therefore, remote DBAs who are knowledgeable about both of these technologies and about conducting administration via the cloud are essential. That being said, enterprises shouldn't ignore those with extensive knowledge of traditional programs such as SQL Server.
The advantages of Azure and SQL Server
Because these two programs are both produced by Microsoft, natural compatibility between them is expected. Network World noted that putting SQL data in Azure can save enterprises anywhere from $20,000 to $50,000 on procuring physical data center equipment (servers, bandwidth, storage, etc.).
In order to ensure security, administrators simply need to configure SQL properly. The source acknowledged the following protective functions can be applied to cloud-hosted SQL databases:
- Azure Security provides users with a "Trust" guide, in which Microsoft details how Azure complies with HIPAA, ISAE and several other data security laws.
- Transparent Data Encryption enables DBAs to encrypt the contents of an entire database, with a key that only those who initiated the encryption task can use.
- Automatic protection involves Azure privatizing databases by default, meaning users actually have to configure the environment to allow the public or unauthorized patrons to view the information.
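For the Transparent Data Encryption feature mentioned above, setup on a self-managed SQL Server instance generally follows the shape below. The certificate and database names are hypothetical, and this is only a sketch; on Azure SQL Database, TDE can be service-managed instead:

```sql
-- Hypothetical TDE setup: protect the certificate, create a database
-- encryption key, then turn encryption on for the database.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;

ALTER DATABASE SalesDB SET ENCRYPTION ON;
```

Backing up the certificate and its private key is essential, since without them an encrypted database cannot be restored elsewhere.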
Aside from these amenities, employing active database monitoring is the best way for organizations to keep cloud databases protected from malicious figures.
At RDX, we provide a full suite of BI services that includes data integration (SSIS), analysis and mining of data (SSAS), and scheduled and manual reporting of data in a variety of formats for visual representation (SSRS).
Our SSIS services include extracting, transforming and loading data from any source into a common format that you can easily understand and use to make better business decisions.
We support high volumes of data and automated workflows, provide many kinds of auto-transformations, and offer custom coding in C# and VB.net.
Our SSAS services allow you to choose between a multi-dimensional (cube) or tabular OLAP – online analytical processing – model to break down the data we've gathered and transition it into your browser of choice for easy, actionable reporting. Our
SSRS services come in an array of formats – drill-down and drill-through reports, graphs, charts and diagrams – so you can make the most of your data, including accessing previously stored reports.
For more details, download our BI whitepaper. We'll see you next time!
The post How RDX’s BI services make a difference: Additional Services Series pt. 3 [VIDEO] appeared first on Remote DBA Experts.
Now more than ever, database administration services are providing their clients with the expertise and software required to support big data endeavors.
They haven't necessarily had much of a choice. Businesses need environments such as Hadoop to store the large amount of unstructured data they strive to collect and analyze to achieve insights regarding customer sentiment, procurement efficiencies and a wealth of other factors.
According to PCWorld, Oracle recently released a software tool capable of querying Hadoop and NoSQL (Not Only SQL) environments. The solution is an add-on for the company's Big Data Appliance, a data center rack comprising Sun x86 servers programmed to run Cloudera's Hadoop distribution.
In order for businesses to benefit from the simplicity of Big Data SQL, the source noted they must have a 12c Oracle database installed on the company's Exadata database machine. This allows Exadata and the x86 Big Data Appliance configuration to share an interconnect for data exchange.
Assessing a "wider problem"
Oracle Vice President of Product Development Neil Mendelson asserted the solution wasn't created for the purpose of replacing existing SQL languages such as Hive and Impala. Instead, Mendelson maintained that Big Data SQL enables remote DBA experts to query a variety of information stocks while moving a minimal amount of data.
This means organizations don't have to spend the time or network resources required to move large troves of data from one environment to another, because Smart Scan technology is applied to conduct filtering on a local level.
InformationWeek contributor Doug Henschen described Smart Scan as a function that combs through data on the storage tier and identifies what information is applicable to the submitted query. Oracle Product Manager Dan McClary outlined an example of how it could be used:
- A data scientist wants to compare and contrast Twitter data in Hadoop with customer payment information in Oracle Database
- Smart Scan filters out tweets that don't have translatable comments and eliminates posts without latitude and longitude data
- Oracle Database then receives one percent of the total Twitter information in Hadoop
- A visualization tool identifies location-based profitability based on customer sentiment
In addition, Oracle allows DBA services to leverage authorizations and protocols to ensure security is maintained when Hadoop or NoSQL is accessed. For instance, when a professional is assigned the role of "analyst" he or she has permission to query the big data architectures, while those who lack permission cannot.
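A sketch of that role-based model in Oracle SQL might look like the following; the role, table and user names here are hypothetical illustrations, not Oracle's actual defaults:

```sql
-- Hypothetical role-based access control for big data queries.
CREATE ROLE analyst;

-- Grant the role read access to an external table exposing Hadoop data.
GRANT SELECT ON sales.twitter_ext TO analyst;

-- Users holding the role can query; everyone else is denied.
GRANT analyst TO jsmith;
```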
The post Can you handle big data? Oracle may have an answer appeared first on Remote DBA Experts.
When shopping for a world-class database administration service, paying attention to what specialists can offer in the way of protection is incredibly important.
For websites storing thousands or even millions of customer logins, constantly monitoring server activity is essential. A recent data breach showed just how vulnerable e-commerce companies, Software-as-a-Service providers and a plethora of other online organizations are.
A staggering number
A Russian criminal organization known as "CyberVor" recently collected 1.2 billion unique user name and password sequences and 500 million email addresses from websites executing lackluster protection techniques, Infosecurity Magazine reported.
Andrey Dulkin, senior director of cyber innovation at CyberArk, noted the attack was orchestrated by a botnet – a collection of machines working toward the same end-goal. CyberVor carefully employed multiple infiltration techniques simultaneously in order to harvest login data.
Where do DBAs come into play?
Database active monitoring is essential to protect the information websites hold for their subscribers and patrons. Employing anti-malware is one thing, but being able to perceive actions occurring in real-time is the only way organizations can hope to deter infiltration attempts at their onset.
Although TechTarget was referring to disaster recovery, the same principles of surveillance apply to protecting databases. When website owners look at the service-level agreement, the database support company should provide the following accommodations:
- Real-time reporting of all server entries, detailing which users entered an environment, how they're interacting with it and what programs they're using to navigate it.
- Frequent testing that searches for firewall vulnerabilities, unauthorized programs, suspicious SQL statements, etc.
- On-call administrators capable of assessing any questions or concerns a website may have.
Applying basics, then language
Although advanced analytics and tracking cookies can be applied to actively search for and eliminate viruses – like how white blood cells attack pathogens – neglecting to cover standard security practices obviously isn't optimal.
South Florida Business Journal acknowledged one of the techniques CyberVor used exploits a vulnerability IT professionals have been cognizant of for the past decade – SQL injection. This particular tactic likely involved one of the criminals crafting input that ordered the SQL database to unveil all of its usernames and passwords.
SQL Server, Microsoft's signature database solution, is quite popular among many websites, so those using this program need to contract DBA organizations with extensive knowledge of the language and best practices.
Finally, remote DBA services must be capable of encrypting login information, as well as the data passwords are protecting. This provides an extra layer of protection in case a cybercriminal manages to unmask a username-password combination.
The post Websites: What to look for in a database security contract appeared first on Remote DBA Experts.
To help our customers make the best possible business decisions, we offer a complete set of Business Intelligence support services for Microsoft’s Integration Services (SSIS), Analysis Services (SSAS), and Reporting Services (SSRS) product sets.
Why Business Intelligence, you might ask? It helps you quickly identify business trends, empower your staff to use relevant information, gain insight into customer behavior for upsell opportunities – and more. Plus, the initial low-cost entry point of SSIS, SSAS, and SSRS requires no third party software and comes with everything needed to create a robust BI environment.
The compatibility of Microsoft’s BI product stack with other applications, combined with having all your support provided by a single vendor, helps to simplify your processes and take maximum advantage of your BI applications.
For more details on the importance of BI and how our product set can enhance your business, click on the link below to download our BI whitepaper.
In our next video, we’ll dive deeper into our specific BI products. See you there!
The post Advantages & Benefits of BI in your business: Additional Services Series pt. 2 [VIDEO] appeared first on Remote DBA Experts.
When considering outsourcing to a remote database support service, look for professionals capable of automating key SQL Server tasks.
Administrators have a variety of tasks they need to perform on a daily basis. In a world rife with IT security woes, it's imperative for DBAs to be able to dedicate enough time and resources to deterring sophisticated cyberattacks. Scheduling rudimentary management jobs to run without manual attention can help them succeed in this endeavor.
Using SQL Server Agent
According to MSDN, thorough knowledge of SQL Server Agent enables professionals to automatically execute jobs on a predefined schedule, respond to particular occurrences and execute backups whenever they're needed. The source outlined a number of additional benefits DBAs often capitalize on:
- Multiple tasks can be executed at the same time
- Jobs can be ordered to initiate when a central processing unit isn't conducting any operations
- SQL Server performance orders can be automatically recorded
- Cookies can be inserted that monitor the actions of authorized operators, as well as intruders
- Logs of all security-related activities performed by administrators can be created
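As an illustration of ordering a rudimentary job to occur without manual attention, a recurring task can be defined entirely through SQL Server Agent's stored procedures in the msdb database. A rough sketch follows; the job name, maintenance procedure and schedule are hypothetical, but the system procedures and their parameters are the standard Agent interface:

```sql
USE msdb;
GO

-- Create the job (name is hypothetical)
EXEC dbo.sp_add_job
    @job_name = N'NightlyIndexMaintenance';

-- Add a T-SQL step; the maintenance procedure it calls is hypothetical
EXEC dbo.sp_add_jobstep
    @job_name  = N'NightlyIndexMaintenance',
    @step_name = N'Rebuild indexes',
    @subsystem = N'TSQL',
    @command   = N'EXEC SalesDB.dbo.usp_RebuildIndexes;';

-- Run every day at 02:00, with no manual attention required
EXEC dbo.sp_add_jobschedule
    @job_name          = N'NightlyIndexMaintenance',
    @name              = N'Nightly at 2 AM',
    @freq_type         = 4,    -- daily
    @freq_interval     = 1,    -- every 1 day
    @active_start_time = 020000;

-- Target the local server so the Agent will actually run it
EXEC dbo.sp_add_jobserver
    @job_name = N'NightlyIndexMaintenance';
```

Once the schedule is attached, the Agent service executes the step on its own, freeing the DBA from babysitting the task.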
For more complex jobs, managers need to be able to receive notifications pertaining to mission-critical instances. For example, whenever hardware fails (due to a storm, or some other disruption), organizations need to be able to offload databases to other equipment or deploy backup strategies. For this reason, manually initiating alerts isn't conducive to business continuity.
What can they do?
Remote DBA experts recognize the benefits of automation, but where does the magic come from? Database Journal outlined several notifications these professionals can utilize through SQL Server Agent. For instance, a job's "lifecycle" can be reported as a success, failure or completion.
While receiving alerts for all three outcomes is possible, it's not necessarily recommended. Configuring notifications only on failure is more constructive, because DBAs' inboxes won't get cluttered with a barrage of emails.
Yet, there are some jobs administrators may want to follow from execution to completion. In order to figure out which jobs are set to employ notifications, DBAs can review the notify_level_email value in the sysjobs table in the msdb database.
If the value is zero, personnel know that no email alerts have been set up for that job. A value of 1, 2 or 3 means a notification is sent on success, on failure or on completion, respectively; the separate notify_level_page and notify_level_netsend columns track pager and NET SEND alerts in the same way.
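To make this concrete, a query along the following lines lists the setting described above (msdb.dbo.sysjobs and notify_level_email are the real catalog objects; the filter shown is just one illustrative choice):

```sql
-- Jobs with no email notification configured
SELECT name,
       notify_level_email
FROM msdb.dbo.sysjobs
WHERE notify_level_email = 0;
```

Dropping the WHERE clause lists the notification setting for every job at once, which makes a quick audit easy.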
Essentially, automation enables experts to optimize database active monitoring thoroughness and speed.
The post Why automation is a key component of database administration appeared first on Remote DBA Experts.
Can Microsoft SQL Server and Informix DB2 environments integrate together? The answer is YES!! I have received an increasing number of questions about cross-platform ETL development work between the two. Driven by these questions, I want to dig deeper into manipulating data between Microsoft SQL Server and Informix DB2.
Recently, I was asked to load data to Informix DB2 using SSIS, which is the focus of my topic. When I was tasked with this request, I did some research and started to develop a solution. However, I ran into some common issues regarding writing via Informix ODBC with SSIS that had no answers out on the internet. Unfortunately, to this day, I have not seen an actual step-by-step blog about this topic based on my own personal searches. With that being said, I decided to blog about it myself.
Let’s start with the basic information first. What do you need to successfully use Informix with your SQL Server environment?
You should know, at minimum, the following:
- What versions of the driver you have
- What version of SQL Server is installed on your server
- What the version of your operating system is
The version of the driver can cause unforeseen issues when trying to load into Informix via SSIS. Check how your ODBC driver is registered. You can do this by simply checking both 32 bit and 64 bit ODBC Data Source Administrator. Here are the commands for 32 bit and 64 bit respectively:
32 Bit: C:\Windows\SysWOW64\odbcad32.exe
64 Bit: C:\Windows\system32\odbcad32.exe
In my current environment, I have both registered.
This is a common issue I have seen between the two. No matter whether you have 32-bit or 64-bit SQL Server, BIDS is a 32-bit application, and the BIDS runtime needs to be set to reflect this. This is done in the solution properties.
In the properties, you click the Debugging option and set Run64BitRuntime from True to False.
Now, you are ready to set up your connections and build your package. In your connection manager, select where your source data is coming from. For my example, it’s going to be SQL Server, so I need an OLE DB connection. The destination I will use is an ADO.NET connection manager.
Here is the little piece that took a while to figure out. Your connection string within your ADO.NET connection manager needs to have “delimident=y” as an argument within the connection string.
Now, my connection string reads as follows:
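It looks roughly like this (the DSN name below is hypothetical; yours will match the entry you created in the ODBC Data Source Administrator):

```
Dsn=InformixProd;delimident=y
```

Without the delimident=y argument, writes through the Informix ODBC driver can fail on delimited identifiers, which is exactly the piece that took a while to figure out.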
Notice that I do not pass my UID or password in through the connection string, because they are already stored on my server from when I set them up in the ODBC Data Source Administrator.
From here, I am going to simply set up my Dataflow with a source and destination using the connection managers that I have created and map all of my columns.
That’s it! Now, all you have to do is run it and test it.
I have just written 27 records to Informix DB2 via SSIS using the Informix ODBC driver provided by IBM! Extracting, Transforming, and Loading data (ETL) sometimes requires outside drivers and connection managers that force us to learn new things, and we are learning new things every day in the development world. I hope that you found my blog informative and that it shortens the search for others writing to Informix via SSIS. Stay tuned for my next blog post in the next few weeks.
Today we’re kicking off a series about our additional offerings, because we think it’s important for your organization to leverage RDX’s full suite of data infrastructure services to improve your organization’s ability to turn raw information into actionable business knowledge.
From our Business Intelligence services – designed to get you the right information about your company to make savvy strategic decisions – to our application hosting, database security and non-database server monitoring, GoldenGate replication services, and support for Windows, MySQL and Oracle EBS, we’ve got every administration need you can think of covered.
We’ll take an in-depth look at each of these services in videos to come, so you can learn how they can benefit your business and choose the services that may be the most important to you.
For more information on our additional services, follow the link below for our Additional Services Whitepaper.
Tune in next time as we discuss the importance of Business Intelligence for your business!
The post A look at how RDX’s Additional Services can meet your needs: Series Kick-off [VIDEO] appeared first on Remote DBA Experts.
Although the transition from paper to electronic health records hasn't been easy, it's certainly paid off.
Those in the medical industry can now access patient information more easily, allowing them to eliminate mistakes characterized by the use of tangible forms. However, organizations should be wary of the dangers EHR implementations pose to database security.
Eliminating grievous mistakes
That's not to say professionals should abandon EHR technology. The National Institute For Health Care Reform acknowledged how using EHR can eliminate what physicians, hospital administrators and others in the health care sector call "unintended discrepancies." These instances are essentially minor mishaps that can have major repercussions.
Unfortunately, fragmented delivery systems can provide inaccurate information regarding medications, especially when patients are being admitted to and released from hospitals. This can cause doctors to accidentally omit, duplicate or add unnecessary prescriptions. In a worst-case scenario, this could cause a person to overdose.
The NIHCR outlined what facilities need to prevent these mistakes from occurring, and it starts with the implementation of an EHR system. Such technology can allow hospitals and personnel to:
- Aggregate accurate, applicable pre-admission medication data
- Compare hospital prescription orders to previous medications so physicians can make educated treatment decisions
- Share relevant lists pertaining to medicines administered for the discharge phase with primary care doctors, nursing facilities and other places
Obviously, a lot of digital information is being stored and transferred. Some connections may be more secure than others, but the environment is a hacker's dream come true. HealthITSecurity contributor Greg Michaels acknowledged that while exchanging patient intelligence may enable physicians to deliver better care, health care organizations find they can't dedicate enough resources to sanctioning safe delivery.
Michaels advised medical industry participants heavily entrenched in EHR uses to work with a trusted, third-party IT security expert. In addition to communication surveillance, the outsourced entity should be capable of providing remote database management and monitoring as well. Michaels also recommended professionals abide by the following best practices:
- Audit all partners to see which ones provide their customers with protected health information and identify which IT protection measures they're taking
- Open communication with third parties so data breaches affecting multiple organizations can be addressed in a united manner
- Ensure all partners are compliant with standards outlined by the Health Insurance Portability and Accountability Act
- Educate in-house personnel on how to take basic security measures
By seeking help from a database administration service and implementing basic protective measures, hospitals will be able to use EHR with limited risk of sustaining an IT disaster.
The post Don’t let database security woes outweigh EHR benefits appeared first on Remote DBA Experts.
The Internet of Things has been a hot topic of conversation among IT professionals as of late.
Promises of more unique insight into customer behavior have tempted consumer-focused companies to invest in the technology. Manufacturers are looking to implement intelligent devices to achieve higher levels of productivity. However, a number of organizations are ignoring the impact IoT will have on database security.
A hacker's haven
It turns out cybercriminals are just as interested in IoT as multi-billion dollar corporations are. ZDNet noted a study conducted by Hewlett-Packard's Fortify division, which scanned 10 of the most prevalent Internet-connected devices, discovering about 25 faults per implementation. The source acknowledged some of the most telling discoveries:
- 90 percent of the devices assessed contained at least one piece of personal information pertaining to an individual
- Weak credentials and persistent cross-site scripting plagued six out of 10 mechanisms
- 80 percent of implementations failed to allow users to employ intricate, lengthy passwords
- 70 percent of devices didn't protect communications with encryption, while 60 percent of such machines lacked the programs necessary to launch encoding tasks
Essentially, it wouldn't be too difficult for even a fledgling hacker to gain access to a company's IoT assets, establish a network connection with its databases and steal information from the business. Database active monitoring can deter such attempts, but a wide distribution of Internet-connected property can make such a task difficult for in-house IT departments to perform.
Where's the issue?
Consumer-focused IoT devices are particularly vulnerable to sustaining damaging cyberattacks because they're so ubiquitous. Yet again, it's important to ask why IoT implementations are so defensively weak in the first place.
Re/code contributor Arik Hesseldahl identified two factors as the culprits of IoT instability:
- Manufacturers are rushing to get these products to market without giving enough attention to security features.
- The majority of these devices run the Linux operating system, which is already prone to a number of defensive shortcomings.
One of the only ways to guarantee hackers aren't infiltrating these assets is by protecting company databases from malware that may be attempting to enter servers through the mechanisms. Why is this backend surveillance necessary? Because the devices themselves don't have the same protective software PCs, tablets and even many smartphones possess.
The scale of the problem? Hesseldahl referenced statistics from Gartner, which projected that 26 billion individual devices will be online by 2020. Essentially, there's a massive pool of property cybercriminals could exploit in order to steal financial information.
As a remote data infrastructure services provider, documenting the activities we perform, as well as our customers’ environments, is critical to our success. RDX currently supports thousands (and thousands) of database ecosystems ranging in complexity from simple to “making your hair stand on end.”
My customers hold my organization to extremely high standards. Rightfully so: they have turned over the keys to their most sensitive and mission-critical data stores to RDX. At the end of every email blast I send to our customer base, I end it with, “I personally appreciate you placing your trust in us to administer and safeguard your most valuable data assets. We take that responsibility very seriously here at RDX.” Stating that we take that responsibility seriously is kind of like saying the Titanic sprang a small leak.
Although the importance of a well thought out and detailed documentation library is blatantly obvious, creating documentation is the task most often postponed by an overworked DBA unit.
Documenting processes, procedures and best practices is a task that is often considered to be boring and mundane. Most DBAs would rather perform virtually any other activity than sit in front of a screen using a word processor. As a result, creating documentation is often postponed until the DBA has a little free time to kill. Today’s database administration units are operating with smaller staffs, tighter budgets and ever-increasing workloads. The end result is that the documentation is either never created or created and not kept current.
However, a robust detailed documentation library creates an environment that is less complex, less error-prone, reduces the amount of time DBAs spend learning new database environments and reduces the overall time spent on day-to-day support activities. DBAs are able to spend more time administering the environment rather than finding the objects they are trying to support and the processes and programs used to administer them.
The nature of my business as a remote services provider demands excellent documentation. The majority of environments we administer weren’t designed by my organization. The only way that we can ensure high quality and high-speed administration of these environments is to document them thoroughly. We document everything from initial connectivity and customer contact sheets to detailed information on database and server information, batch job streams and individual program execution (what it does, run-time windows). If we need to be aware of it, we have it documented.
Documentation is also the foundation of many of the other disciplines I will be discussing in future blogs. Let’s continue our discussion with a few helpful hints to get you started.
Understanding the Important Role Good Documentation Plays
We all generally understand the benefits that documentation provides. I think that all readers will see the importance I personally place on documentation in upcoming blogs.
Let me reaffirm my opinion in this one sentence: Good documentation is the foundation that high-quality data infrastructure services are built upon.
Creating an Organizational Environment That Fosters Good Documentation
I’ve been the Vice President of Service Delivery at RDX for 6 years now. It is my responsibility as manager to create an environment that fosters the production of robust and high-quality documentation. Let me describe some of the challenges that I have faced in the past at other organizations and how I have overcome them.
Since I view high quality documentation to be my responsibility as a manager, I ensure that it becomes part of every DBA’s performance appraisal criteria, including my own. If it isn’t on my, and my unit’s, performance appraisal forms, I will ask to have it added or make my own personal addendum and notify both the DBA team and management that I have done so.
I will add time for documentation when I estimate the amount of time it will take me to perform an administrative task during project planning meetings. I don’t settle for “we can do that after the project is complete” as an answer.
If you continuously sell the importance of documentation, sooner or later, you will begin to wear your opponents down. Although I prefer to call it “being relentless,” I’m sure that many of the application development managers (and my own managers) viewed it as “being a ….” (insert your favorite description here).
Every document I have created that provides a list of activities I, or my unit, need to perform during a project has documentation included. It helps to integrate documentation into the fabric and culture of my organization’s environment.
Making Documentation Second Nature
You also need to ensure that generating documentation becomes a natural part of your daily activities. You must continuously remind yourself that documentation is a primary and integral part of providing high-quality support services to your customers.
You must also remind yourself that it makes your job easier and benefits your fellow DBAs. It is a recipe for disaster when a fellow DBA needs to be out of the office for a time and asks another DBA to “help them out” by performing a complex, application-specific administrative activity and then tries to verbally tell them how to perform the 326 steps it takes to execute it.
Did you ever try to refresh an ERP application test environment from production when that test environment doesn’t have enough space to hold all of production’s data? 4,000 steps later, you begin to second-guess your choice of professions. That was the exact request from one of my fellow DBAs when I first started in this profession, and it quickly taught me the importance of good documentation. Not only did he get me to do the refresh, but I also had to document the process for him along the way. Some call that being a good coworker; I would view that as having a big sucker taped to my forehead.
The moral of this story is this: If you don’t want to be the only one who can perform that 900-step ERP application production-to-test refresh, document it! If you don’t want to be called by the on-call DBA because he doesn’t know exactly where to add a file in an emergency situation (like someone forgetting to tell you that they were loading 10 million additional rows into that 100-row table), document it! The more you document, the easier your life as a DBA becomes.
I’ve never had a photographic memory. It makes generating documentation easy for me. I also like to write, and that helps, but I will admit that there are times that I would rather perform virtually any other activity than document.
However, it has become easier because I continuously reaffirm to myself the importance of documentation. The more you reinforce that to yourself, the more second nature (and easier) it becomes.
Making Documentation Easy
I’m a huge fan of documentation templates. Here at RDX, we have templates and Standard Operating Procedures for everything we document. If it is repeatable or a complex process, we have an SOP for it. We have templates for documenting connections to our customers’ environments, their backup and recovery environments and their application specific processes, to name a few. If it needs to be documented on a regular basis, we have a template for it. We also have generic templates for documenting environments and activities that don’t fit into other templates.
Word Documents and Templates
Word document templates provide many features that streamline the documentation process and help to improve the quality of the content they store. I try to take advantage of as many features as I can. I use drop-down selection menus, check boxes and radio push buttons to improve the speed and quality of the documentation process. I also take advantage of the help pop-up feature that Microsoft Word provides to create a detailed description of what information is to be entered into that field, check box or radio button.
We heavily utilize Wikis to intelligently and securely display information about the environments we are tasked with supporting. A common, menu-driven interface has been designed, tuned and tweaked over our 20-year history. The Wiki’s contents include customer contact and escalation information, detailed database/server information, customer change management procedures, RDX DBAs assigned to the account, the on-call team assigned, non-sensitive connection information (VPN type, VPN vendor, etc.) and job information. The Wiki page also links to current tickets, current time cards and a history of past problems contained in our problem resolution library.
The Wiki content is controlled by a well-defined change management procedure and relies upon Wiki templates to ensure information is stored and displayed in a common format that adheres to RDX specifications. Once again, templates help improve the quality of content, speed data entry and ensure a uniformity of display pages and menus. We constantly review the Wiki for content and usability as well as leverage new Wiki features as they are released.
Database-Driven Content Managers
There are dozens of software companies that offer content management solutions. Database vendors have also recognized this as a lucrative market. All of the major database vendors now offer advanced content management software, each one trying to outdo the other in the number of bells and whistles that their products offer. Do a quick search on Google for documentation content management software, and you will find out just how many competing products there are.
Content management products offer check-in/check-out features, document versioning, web portal access and advanced workflow capabilities to name just a few of the features designed to improve content management. The competition in the content management market space is fierce to say the least. Content management vendors know that continuously adding new bells and whistles to their products is not just important for increasing market share, but it also is critical for their survival. Product costs can range from thousands to tens of thousands of dollars (or more).
If you have the funds and your management understands the benefits that a full-blown content management package provides, by all means begin a content management product analysis. But if you don’t have the funds, create a shared drive on your network and declare it to be the “DBA Documentation Portal.”
What to Document
By all means, this is not an all-inclusive list of what can be documented. Consider it as a starter kit to help you begin your quest for “documentis nirvanas.” Is some of this overkill for your particular environment? Maybe, but just consider this a general, high-level list. Since most readers will work for a single organization, I’m focusing my recommendations on DBA units that support one corporate environment.
Database Environment Documentation
- Naming conventions
- Servers (server names, operating system release, hardware vendor)
- Databases (vendor, database version, features enabled)
- Application type (i.e. data warehouse, online transaction processing, decision support, third-party application name and functionality it provides).
- Business unit requirements and related information for supported databases
- Uptime requirements (i.e. 24 X 7, 8 X 5)
- Database downtime windows
- Critical job processes
- Business unit and application developer contact lists
- Turnover windows for database changes
- Problem notification and escalation procedures
- Security sensitivity- How sensitive is the data?
- Repeatable administrative processes (covered in an upcoming blog)
- Backups – Probably the most critical set of documentation you will ever create- Document how it is backed up, what scripts back it up, where the backup is going to, retention periods and backup message directories. If it is involved with a backup, DOCUMENT IT. Review the document with other units that are involved in the backup and recovery process. It is your responsibility to ensure that you don’t hear an operator say, “What retention period? Nobody told me we were to have a retention on these files” when you are in a recovery situation. Remember that Oracle states that human error, including miscommunications, is responsible for over 90% of failed recoveries. If you want to reduce recovery failures, DOCUMENT THE PROCESS AND REVIEW IT.
- Anything else you run on a regular basis to support a specific application
- Change management- I’ll be spending an entire blog, or two, on this
- A daily monitoring activity checklist to ensure that no activity is missed- We have daily, weekly and monthly activities that are to be performed for each of our customers
- Complex administrative activities performed regularly
- Test and reporting database refreshes
- Data reorganizations
- Disaster recovery tests- The processes required to perform the recovery AND the criteria that will be used to evaluate whether it was successful or not
- DBA-specific stored PL/SQL and TSQL programs
- Day-to-day support scripts (where they are and what they do)
- Monitoring scripts (where they are and what they do)
- Scripts used to perform database administrative changes- I personally utilized specific directories that provide output from critical database changes that I have performed and other directories containing the SQL used to make that change
- Operating system scripts- Document what the script does in the beginning of each of your scripts. Did you ever try to determine what a 400 line script does that was created by someone who knows much more about UNIX scripting than you do? We have all been in that position at one time or another during our career. Make it easy on your coworkers to understand what the script does by putting comments at the top of the script as well as in the body. Also keep a running history of script changes, what they were and the time they were made
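To sketch the script-commenting practice above, a header block along these lines makes a script's purpose and change history obvious at a glance (the script name, author initials and history entries are all invented for the example):

```shell
#!/bin/sh
# Script:   refresh_test_db.sh  (hypothetical name)
# Purpose:  Refresh the TEST database from last night's PROD backup
# Change history:
#   2015-01-10  A.D.  Initial version
#   2015-02-02  B.D.  Added retention-period check before restore

# Placeholder for the real refresh steps; a production script would
# call its backup/restore commands here and check their exit codes.
status="refresh complete"
echo "$status"
```

A coworker picking this up cold can see what it does and what has changed without reading all 400 lines first.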
Database Administration Unit Organizational Documentation
- Contact Information
- DBA roles and responsibilities- Which applications, databases and tasks they are responsible for supporting
- DBA unavailability- Allows application developers to plan for a DBA not being available
It is a good practice to distribute this information to all business units supported by the database administration unit.
I hope you enjoyed this blog on documentation and the important role it plays.
Effectively storing and managing big data takes more than simply having a lot of hard disk space.
The variety and complexity of the information produced by Internet-connected assets has forced database administration services to adapt to new processes and environments. Their focus on accessibility and security hasn't wavered, but the manner in which they approach these priorities has transformed.
Solving the puzzle: structured and unstructured data
BrightPlanet, a company that specializes in harvesting data from the Internet, outlined the differences between unstructured and structured data. While volume has always challenged databases to hold massive troves of organized intelligence, one of the chief difficulties resides in the act of actually arranging it.
- Structured: Pertains to data that is highly constructed and easy to query and is typically held in relational database systems. A spreadsheet is an example of structured information.
- Unstructured: Applicable to data that doesn't subscribe to a particular architecture and is usually stored in NoSQL databases, which run complex algorithms to create environments capable of managing it. Social media posts are examples of unstructured information.
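For instance, a single customer record stored as a self-describing document might look like the following (all field names are invented for illustration):

```json
{
  "customer_id": "C-1001",
  "name": { "first": "Avery", "last": "Smith" },
  "social_posts": [
    { "site": "Twitter", "text": "Great service!" }
  ]
}
```

A relational table would force every row into the same columns; a document simply carries whatever fields apply to that record.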
What does this mean for remote database services?
According to MongoDB, conventional DBA practices needed to become more agile in order to query large collections of unstructured data, giving birth to NoSQL databases. These systems sanctioned the development of "document" storage, which has spawned the following benefits:
- Documents are regarded as independent entities, which makes it simpler to transport data across multiple virtual locations.
- Data doesn't need to be translated between application objects and SQL tables.
- Because a document contains whatever values the software language requires, unstructured data is easy to store.
In response to this development, DBAs learned the administrative languages and tools needed to launch and manage document-based data environments.
Different program, same responsibilities
As one can imagine, DBAs are still expected to perform the same database active monitoring tasks they have handled since the inception of digital information storage. There are also a number of additional responsibilities these professionals are undertaking:
- Understanding how clients plan on using the data. Are they simply looking to scrutinize it or allow applications to make intelligent decisions with it?
- Securing firewall access. What tactics are cybercriminals employing in an attempt to penetrate these environments?
- Managing and monitoring performance. How well are software deployments adapting to unstructured data?
Outsourcing to DBAs knowledgeable in contemporary enterprise needs and NoSQL databases may be a good tactic for organizations to use.
As hospital personnel continue to access patient records through mobile devices, health care organizations are taking new approaches to database security.
Assessing initial requirements
The best way for CIOs in the medical industry to measure the performance of their server protection strategies is to ensure all software deployments are compliant with the Health Insurance Portability and Accountability Act. Information Week contributor Jason Wang acknowledged the basic requirements HIPAA obligates mobile applications and networks to possess:
- Authorized, defended user access to protected health information
- Encryption features that hide sensitive data from unsanctioned personnel
- Routine security updates to eliminate bugs or loopholes in the network
- A remote data-wipe feature that administrators can activate in the event a mobile device is lost, stolen or compromised
- A solid business continuity/disaster recovery framework that can be tested on a regular basis
With these points in mind, health care organizations would greatly benefit from having a third party develop an enterprise-wide mobile application for their facilities. Salesforce CRM in particular is a solid option for those looking to install such an implementation, primarily due to its reputation for having HIPAA-compliant security features.
The risks involved
Many medical professionals believe employing a mobile network will help their subordinates allot more attention to patients. While this concept may be true, there are a number of threats that left unacknowledged could infect such a system. Having a third-party company constantly conduct database active monitoring tasks is imperative to deterring the following dangers:
- Mobile devices, as well as wearables, are easily misplaced, meaning that those who come across these mechanisms could access private patient information
- As a number of health care providers communicate with patients through social media, malware and other Web-based attacks could be funneled through such mediums to infect devices
- Because mobile keyboards are rudimentary, users are more likely to use uncomplicated passwords that can easily be unmasked.
Be a smart user
Database administration needs aside, health care companies must also provide personnel with a secure line of communication. HIT Consultant noted that text messaging is a solid way for hospital staff to transfer information quickly and on the go, but the avenue lacks the encryption technology necessary to keep these communications secure.
Installing an encoding program geared specifically toward mobile text messaging is a good move to make. However, employees should also be cognizant of the fact that they should not explicitly share vital information, if they can help it.