Friday, November 21, 2008

10g Migration Ramification Part 2: Making the database a safer place

In Part 2, I will be discussing a couple of the security changes Oracle has made in 10g Release 2 that have affected my life as a DBA. (I can't speak to 10g Release 1 as I pretty much skipped this release altogether). As a whole, these changes point to Oracle's continuing focus on database security, although it could be argued they have a long way to go.

Changes to the DEFAULT profile:
Oracle has changed the out-of-the-box setting of FAILED_LOGIN_ATTEMPTS in the DEFAULT profile. In 9i the default setting was UNLIMITED; in 10gR2, the profile has been updated to have a value of 10. What this means is that if a user has 10 invalid attempts without a valid login, the account will be set to the LOCKED(TIMED) status. Since the DEFAULT profile has PASSWORD_LOCK_TIME set to UNLIMITED, the account will never unlock without manual intervention. This makes an out-of-the-box 10gR2 database more secure, in theory, but there is one side effect: if you do not change the setting or create custom profiles with appropriate settings for application accounts, it is quite simple to engage in denial-of-service attacks against the database.

I've seen this change in security cause production issues mainly with third-party applications that don't have the ability to change a user's profile or with internal processes that are used to reset passwords for users. This has caused increased help desk tickets due to locked accounts even after a password reset was completed.

Thankfully, this is pretty easy to solve depending on your business rules.
  1. Alter the DEFAULT profile to match your company standards. I'm not the biggest fan of this. I'm not sure whether Oracle preserves changes you make to the DEFAULT profile between upgrades. I'll have to test that sometime when moving to 11g.
  2. Create new profiles with your own security settings. To get around certain third-party applications that create their own users, you may need a system trigger to change the profile after a user is created. You may also need to include automatic 'unlock' functionality in the case of a user that has their password changed and is in a LOCKED(TIMED) status. This could alleviate help desk calls.
  3. To deal with the unlock issue, you could also change the business processes that reset passwords to include an unlock of the account. There are a lot of options when it comes to Oracle security and profiles. Explore all of them.
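As a sketch of option 2 above (the profile and user names here are hypothetical, and the limits should follow your own standards rather than these):

```sql
-- Hypothetical application profile: either allow unlimited failed logins,
-- or pick a limit plus a short PASSWORD_LOCK_TIME so accounts self-unlock.
CREATE PROFILE app_profile LIMIT
  FAILED_LOGIN_ATTEMPTS UNLIMITED
  PASSWORD_LOCK_TIME    1/24;   -- 1 hour (only relevant if attempts are limited)

-- Assign the profile to an application account.
ALTER USER app_user PROFILE app_profile;

-- Manually unlock an account stuck in LOCKED(TIMED) status.
ALTER USER app_user ACCOUNT UNLOCK;
```

The ACCOUNT UNLOCK statement is the piece a password-reset process would need to add to stop the help desk calls.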
Changes to default file permissions to the Oracle binaries:
The Oracle 10g software installation on most Unix/Linux platforms has become more secure out of the box. In fact, Oracle has taken it to the opposite extreme. Prior to 10g, the Oracle software installation was pretty wide open from a file permission standpoint. Starting somewhere in the 10g releases, the ORACLE_HOME has been locked down to only allow access to the owner and group. From a security standpoint this looks to be a good thing; however, it can cause slight headaches for applications and developers that use the Oracle client on these platforms. Thankfully, in Oracle 10gR2 there is a script to loosen security on the ORACLE_HOME: changePerm.ksh, located in $ORACLE_HOME/install on most Unix/Linux platforms. You can run it as-is and it will more or less put the permissions back to the way they were in previous versions, or you can customize which permissions are loosened.
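For reference, running the script looks something like the following. This is a sketch; the exact invocation and available options vary by version and platform, so check the script's own header comments first:

```sh
# Run as the oracle software owner
cd $ORACLE_HOME/install
./changePerm.ksh
```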

Database Link passwords no longer stored plain text:
This is a wonderful change from previous versions. For far too long, Oracle stored database link passwords in SYS.LINK$ (and exposed them via USER_DB_LINKS) in plain text. While this has always been a nicety for DBAs, it definitely exposes a security hole in the database. In Oracle 10g, the passwords are now stored encrypted (or maybe hashed, I forget) so that DBAs and developers alike cannot gain access to schema accounts. Even the USER_DB_LINKS view shows no password when logged in and looking at private database links.

What this means to the DBA is a bit of a change in how they move database links. An old trick was to run the following:

SELECT name, password, host FROM sys.link$;

You will also need to join to SYS.USER$ to determine the link owner. With that, the DBA could easily get passwords and recreate database links. Luckily, in 10g, the DBA doesn't need to know passwords. The old trick of using the password hash works just fine for database links as well.
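Pieced together (the column names here are from memory, so verify them against your own data dictionary), the 9i-era query looked something like:

```sql
-- 9i: link passwords were visible in clear text to anyone who could read SYS.LINK$
SELECT u.name AS owner, l.name AS db_link, l.userid, l.password, l.host
  FROM sys.link$ l, sys.user$ u
 WHERE l.owner# = u.user#;
```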

SELECT name, passwordx, host FROM sys.link$;

If you take the output and format a database link create statement like the following, you will have a working database link:
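A sketch of such a statement (the link name, hash value, and TNS alias below are placeholders; substitute the values from the query above):

```sql
-- Recreate a private link using the stored hash instead of the clear-text password
CREATE DATABASE LINK my_link
  CONNECT TO remote_user IDENTIFIED BY VALUES '<hash from passwordx>'
  USING 'REMOTE_TNS_ALIAS';
```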


This assumes you are logged on as the owner of the database link.

There are other methods for recreating database links in 10g. Refer to the Oracle-L thread that occurred recently for more information.

I know there are a lot more security changes in Oracle 10g. These are the big ones that I've encountered in migrating from 9i to 10g. Hopefully this will help any small hiccups in your upgrades.

Friday, October 31, 2008

Don't Cramp My (SQL) Style ... Musings from an Oracle DBA

This post was inspired by a Twitter thread that took place on October 31st between myself, @oraclenerd, @dtseiler (maybe I only included Don for his tongue in cheek, playground banter), and @surfsearcher :
@oraclenerd: "besides commas at the start of a line (sql, plsql), I also hate mixed case sql and plsql"

@oraclenerd: "@piontekdd come on! it's just plain ugly...besides, I've gotta have at least a little something to bitch about now and again. :) "

@piontekdd: "@oraclenerd life is too short to get upset about personal style :)"

@piontekdd: "@oraclenerd I like mixed case and see the validity in commas at the beginning ;)"

@oraclenerd: "@piontekdd well sir, I would argue that if you like commas at the beginning and the brewers, well... "

@dtseiler: "@oraclenerd I like commas at the beginning and the Brewers. Do we need to take this outside?"

@piontekdd: "@oraclenerd putting commas at the beginning (while not my natural style) is easier to cut and paste lines and to not miss commas."

@surfsearcher: "@piontekdd the last dba i worked with corrected me on doing it that way, made me change all my sql b/c commas in front is "non-standard"."

@surfsearcher: "@piontekdd this dba was known for enforcing her personal standards to a point well beyond reason. i'm with you on the commas."

I believe the gist of what Chet was getting at was a personal style issue in formatting SQL or PL/SQL code:

where foo = 'BAR' and
      bar = 'FOO'
order by

versus:

WHERE foo = 'BAR'
  AND bar = 'FOO'

While I'm sure Chet was just venting over some work issues, it got me to thinking about standards in the IT workplace versus personal style. (Personal baseball tastes and my obvious overuse of emoticons on Twitter notwithstanding.) I'm all for having coding standards, database object standards, etc. for in-house development. However, I do find that coding, whether it be .NET, Java, SQL or PL/SQL, has a bit of personal style to it. This does not mean making code difficult to read or impossible to follow, but I can see the need for some personal latitude when coding. I've experienced my own coding creativity slowed down by worrying too much about the standard and not enough about what I was attempting to solve. Is it not enough to provide excellent documentation and comments in your code?

I have worked as a developer on large systems (Oracle and C), as well as on applications where I am the lone developer (Visual C++). On large systems, it is important to adhere to some sort of standard to make the code easier for others to support. I've seen proof that following these standards can keep maintenance costs down. I just don't see how giving a developer latitude on where to put a comma, or which case of letters to use, is going to hurt that.

I've been an Oracle DBA for over eight years. I've written my share of SQL and PL/SQL packages. I totally understand the need for certain standards. When designing data models for new applications, it is very beneficial to have object naming standards (tables, indexes, constraints, etc). It is important for developers to have a set of SQL guidelines. (e.g. Using bind variables where appropriate).

When I read @surfsearcher's tweet above, I started to get a bit passionate about writing this. I'm a DBA. At the end of the day, I'm ultimately responsible for what gets promoted to production and for the health of the database. However, my personal preference for where a comma goes shouldn't enter into the equation. It is my job as the DBA to work with the developers, not push my personal style down their throats. It is my belief that DBAs and developers need to work hand in hand in a collaborative way. If I ask someone to change something, I better have a better reason than "It is not standard".

I often see developers and DBAs engage in this 'us vs. them' relationship. I see it in online forums and lists, with DBAs referring to developers as 'duhvelopers'. I've seen it in the workplace, where DBAs are viewed as overbearing control freaks, or where the DBA thinks their life would be easier without developers. It does very little for making a better application. I've seen my share of 'duh' moments in the development life-cycle, and my share of 'duh' moments by DBAs (myself included). I firmly believe a cooperative, open environment must exist for a successful application to be designed, developed, and deployed.

What do you think? Is there room for personal style when coding in the workplace? Can strict naming standards co-exist with personal style/preference? Is all this cramping your style?

Sunday, July 27, 2008

10g Migration Ramification Part 1: ORA-1426, Numeric Overflow

Oracle 11g was released over a year ago, but let's face it, not everyone is running Oracle 10g, let alone 11g. July 31st marks the end of the 'free' Premier Support Oracle offered up to all Oracle customers last summer. With that spirit in mind, I thought I would share some of my Oracle 10g Migration Ramifications. ('Upgrade' is a boring word and doesn't have a lot of words that rhyme with it. It is also a great homage to Adam Sandler's 'Cajun Man'.)

In part 1, I will discuss an ORA-1426 you may encounter in PL/SQL code if you are using large literal integers and expecting a NUMBER type to hold it.

The code:

DECLARE
  lv_results NUMBER := 0;
BEGIN
  lv_results := 999999999 * 3;
END;
/

Oracle 9i Results:

PL/SQL procedure successfully completed.

Oracle 10g Results:

ERROR at line 1:
ORA-01426: numeric overflow
ORA-06512: at line 4

I believe the 10g result is related to a change made in Oracle 10.1. There are various PL/SQL optimizations and new features added in Oracle 10g Release 1 and Release 2; check out Metalink Doc ID 311971.1, "New Features for Oracle10g PL/SQL 10.x", for a good list. I believe what we are seeing in the code above is an optimization made to the PL/SQL engine. In the example above, the PL/SQL engine is attempting to use an integer math engine to perform its calculations. This makes sense, since the math involves two integer literals, and integer math is faster than floating-point. Unfortunately, 999999999 * 3 = 2999999997, which overflows the integer engine's range, hence the ORA-01426.

If we change the code slightly, we can force floating-point math and avoid the numeric overflow.

New Oracle 10g Code:

DECLARE
  lv_results NUMBER := 0;
BEGIN
  lv_results := 999999999 * 3.0;
END;
/

New Oracle 10g Results:

PL/SQL procedure successfully completed.

You could also avoid the issue altogether by using variables of type NUMBER.
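A sketch of that variable-based approach (the variable names are illustrative): because the operands are NUMBER variables rather than integer literals, the engine falls back to NUMBER (floating-point) arithmetic and no overflow occurs.

```sql
DECLARE
  lv_operand NUMBER := 999999999;
  lv_results NUMBER;
BEGIN
  -- NUMBER operands avoid the integer math engine's intermediate result limit
  lv_results := lv_operand * 3;
END;
/
```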

The moral of the story with any database migration (or application change) is testing, testing, testing. Even the smoothest, most tested migration is going to come up with a few small issues (or one really big one). Even when you are armed with a list of all the new features, possible bugs, etc., there are bound to be surprises for developers and DBAs alike.

Good luck with getting your 9i databases migrated (or 8i, or even 8 and 7).

Thursday, July 24, 2008

ASP State and SQL Server: The Mirror Has Three Faces

In the spirit of Jonathan Lewis' recent post, Sometimes, Everything I Write Sucks!, comes today's installment of the Prestidigitation of Oracle. Today's prestidigitation is that I'm not going to write about Oracle (or why I need to blog more), but I'm going to focus on a topic I've been dealing with lately which is implementing the ASPState database with SQL Mirroring.

This is part one in the series, and will cover the setup of a custom ASP session state database for use with ASP.NET applications, backed by a SQL Server database configured for SQL Mirroring. This article isn't going to cover the actual SQL Mirroring setup. A lot of what I did was learned from an article on Storing Session State in a SQL Server Database.

The ASP session state database has three installation options: a non-persistent database that utilizes tempdb; a persistent database that uses a default database called ASPState; and a persistent database with a name of your choosing (i.e., a custom database). Hence, the mirror has three faces. Given the premise that we have a SQL Server 2005 database set up for SQL Mirroring, and that we want the session state to be persistent between restarts as well as capable of being mirrored, the only choice I see is to store the ASP session state information in the application's database; in other words, the custom option. This keeps the session information encapsulated in the same database as the application itself, and allows the information to be mirrored to another SQL Server database.

Installing the ASP session state database is fairly straight-forward. The first thing you'll need is the ASPNET_REGSQL.EXE executable. I found this under the .NET Framework 2.0 directory; in my case, this was C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727. I prefer Windows authentication, so the installation is a single command:

ASPNET_REGSQL.EXE -E -ssadd -sstype c -d <database> -s <server>

This will install a couple tables and a bunch of stored procedures into your database. Voila, you now have a custom ASP session state database that should be ready for SQL Mirroring.

One other step I performed is one of security. Since the ASP session state database is stored-procedure based, I need to give the application user the ability to execute stored procedures (i.e., the app will not connect as dbo or sysadmin).

CREATE ROLE db_executor;
GRANT execute TO db_executor;

You now have a role that carries execute privileges on any stored procedure in the database. Assign the role to your application user and everything should work nicely.
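Assigning the role in SQL Server 2005 can be sketched as follows (app_user is a placeholder for your actual application login's database user):

```sql
-- SQL Server 2005: add the application user to the new role
EXEC sp_addrolemember 'db_executor', 'app_user';
```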

In the next installment, I hope to cover the testing of persistence in a SQL Mirror fail-over scenario. Don't expect it too soon (the prestidigitation continues) and look for more Oracle posts to come soon.

Wednesday, July 23, 2008

IOUG RAC Attack! ... I surrender!!!

The Oracle RAC SIG and IOUG are putting on an event in a few weeks in Chicago, August 4th and 5th. It is called the IOUG RAC Attack! (awesome name; I wonder who came up with it). Since my current job entails supporting an Oracle RAC environment (two 5-node clusters) and we have some newer team members who are a bit green in the RAC arena, I thought it would make sense to check into the event and agenda to see if it is worthwhile to send some people.

From looking at the agenda and the quality speakers they have lined up, I'm finding it difficult to find reasons not to recommend this event to my colleagues. For $650 (a 2-day event with a hands-on lab) this looks to be good bang for your buck. I'd love to go myself, but my schedule just won't allow it. I'm especially interested in hearing what Alex Gorbachev has to say in the Oracle Clusterware lecture, as well as Jeremy Schneider's Oracle Services lecture. Hopefully some of my colleagues will attend, take good notes, and share with the group when they return. *hint hint*

With that, let the registration begin.

On a semi-related note: it might be cool for the Oracle RAC SIG or IOUG (or both) to put together online badges for their members. It would promote both organizations in the social networking world and blogosphere. (If they already have them, my humblest of apologies.)

Disclaimer: I'm in no way affiliated with the Oracle RAC SIG or IOUG (aside from being a member of both).

Tuesday, July 22, 2008

9 Reasons I Need to Blog More

Almost on a daily basis, I sit and stare at the Prestidigitation of Oracle and wonder what I should blog about today. I know I have a lot of ideas floating around in my head, and all it takes is that mental kick in the arse to get them out. And yet, here I am on July 22nd, 2008 with only 3 blog posts since starting in February. (That is a) not a good average, and b) really only 2 posts, since the first one was an inaugural post.) This is my attempt to kick my brain in the arse and do what I set out to do this year. Seth Godin blogged the other day about 12 ways to get your blog to the top of Digg; I thought I'd try some of the tactics he mentioned in an earlier post. I've also been semi-inspired by Fred Wilson's A VC blog over the course of the year. While he tends to dwell in a world completely foreign to me, it is rarely not an interesting read and can get my creative juices going. Without further ado, and in no particular order:

1. Everybody is doing it
- I just threw that in there for the high school effect. I'm really not a follower (well, I am on Twitter). You can definitely argue that I am on the bandwagon a bit late, but better late than never. In all seriousness, I read a lot of interesting blog articles from Oracle professionals, venture capitalists, social networking sites, etc. I think I have something to offer and give back to the community.

2. More Blogging = Less TV time
- I love TV, I really do. The DVR was the best (and worst) invention to come in the past 5-10 years. Although, now that I think about it, I can sit in front of my laptop with a favorite TV show in the background. Maybe I should find a better hobby :)

3. What goes better with beer than blogging?
- I've seen lists like "Best Beer Drinking Sports" and the like. Well, blogging is a perfect fit for beer drinking. I love beer; I love making it, drinking it, and finding new beer. The thought of sitting back, relaxing with a cold one, and writing the thoughts in my head seems like a perfect blend.

4. Time to increase my Oracle breadth
- I spend most of my days working on company-related database issues. There doesn't seem to be nearly enough time to do R&D. Keeping an Oracle-related blog is a great way to force me to do my own research and improve my Oracle DBA skills on my own. Now I just need to get this laptop's memory increased so I can get Oracle 11g running on VirtualBox, and I'll be in business. I've started keeping a daily list of topics I want to blog about; the next step is to get them written.

5. Aspirations
- I have aspirations to present at Oracle technical conferences. I look at blogging as a great first step to getting to that goal. This will force me to come up with ideas and not only find my voice, but also an audience (I hope).

6. Peer Pressure
- I suppose that sounds an awful lot like reason #1. However, it is a bit different. I look at peers and see the amazing things they are doing on their blogs and the great content they are creating and sharing with the oracle community (Look at Dan Norris, Eddie Awad, Greg Rahn, Don Seiler to name a few) and I say "hey, I can do that, and can hopefully add my own perspective".

- An interesting aside related to Greg Rahn. He probably doesn't realize it, but without Greg Rahn, I probably wouldn't even be an Oracle DBA. In 2000, I had just been hired on as a full-time developer at a local utility. Greg was a DBA there. He resigned later that year, I think to find his fortunes out west during the dot-com boom. That left an opening on the DBA team, and I was approached by one of their senior DBAs to become a DBA. I thought it was a crazy idea. Those DBAs didn't do anything ;) I was a C developer; I knew it all. Luckily, I decided to take the leap and have been loving it ever since. Thanks, Greg, for resigning and opening up that door to me ;)

7. Twitter: Sometimes 140 characters just isn't enough
- I love Twitter. It is a somewhat emerging social networking idea. Some think blogging is dead and that microblogging is the wave of the future and will replace blogging. I'm not sure I agree at this point, but I do find it a very useful medium to share ideas and expand my horizons. Sometimes (okay, a lot of times) my thoughts take up more than 140 characters. That is where the Prestidigitation of Oracle needs to take over.

8. Unload some of those brain cells.
- I'm a big believer in the Use It or Lose It principle, and I think it applies to daily life. Whether it be that new whiz-bang Oracle feature you just read about, the latest training class you are thinking about taking, or documenting that new package you just wrote: if you don't use it and write it down, you'll lose it.

9. Comfort Zone
- I think it is real easy to get into a comfort zone. Life can get hectic and it is simple to get into a routine. You need to challenge yourself and shake things up to grow. I look at blogging as a natural way to do this. While I feel I've always been a decent writer, it has generally been with little purpose (essays, papers, etc). I need to find a purpose and hopefully this can get me on the right track.

Monday, May 12, 2008

Tablespace 195% Full ... must be the metric system

I was recently refreshing a test database with production data (using Data Pump, if anyone cares) and had forgotten to check the tablespace sizes between prod and test. During the load, I obviously ran out of space. A quick check in prod showed I needed 4GB of space; the test tablespace was set to grow to 2GB max. To add space, I ran a simple resize command:

ALTER DATABASE DATAFILE '/u01/oracle/data01/blah.dbf' RESIZE 8G;

I resumed the datapump and went on my merry way. A few minutes later I received an email from Grid Control that the tablespace was 195% full. 195%?!?! That is crazy. Being that it was Friday afternoon and I wanted to get out of the office for the week, I chalked it up to some flaky Enterprise Manager metric gone wrong and moved on.

Upon returning on Monday, I noticed the same alert had not cleared. Time to dive in and research the issue on Metalink. I found some good docs referencing bugs fixed in later patchsets, but no work-arounds to speak of. What I did find was the source of the metric: a view called DBA_TABLESPACE_USAGE_METRICS. I had never heard of this one before (it looks to be another 10g-ism, and looks very simple and easy to use compared to past ways of calculating tablespace utilization). A quick query on the test database showed that, indeed, there was twice as much used space as there was "free space".

After some further digging, I discovered that the TABLESPACE_SIZE field includes the maximum autoextensible size of the tablespace. In this case, I never altered the max size (it was still 2GB), yet 4GB was used, so the view was indeed showing 195% utilization. One simple alter later to turn autoextend off, and the view now shows the correct utilization of the tablespace and the EM alert has cleared.
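That alter can be sketched as follows, reusing the datafile name from the resize above (yours will differ):

```sql
-- Turn autoextend off so TABLESPACE_SIZE reflects the actual allocated size
ALTER DATABASE DATAFILE '/u01/oracle/data01/blah.dbf' AUTOEXTEND OFF;
```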

What I'm not entirely sure of at this point is whether this is working as designed or a bug. I would expect the autoextensible size to be ignored if the actual size of the tablespace were larger. I'll have to check what happens in a newer release.

For a good explanation of this somewhat undocumented view, check out Mr Ed's blog and his writing on the DBA_TABLESPACE_USAGE_METRICS view.

I thought I would add a quick example to better illustrate the issue.
SQL> create tablespace test datafile '/oracle/data01/db960/test_01.dbf' size 8M
 autoextend on next 8M maxsize 128M;

Tablespace created.

SQL> create table test_table (test_column number(1)) tablespace test storage(initial 128M);

Table created.

SQL> select tablespace_name,used_space*8192/1024/1024 used_mb ,
 tablespace_size*8192/1024/1024 size_mb,used_percent from dba_tablespace_usage_metrics
 where tablespace_name = 'TEST';

TABLESPACE_NAME                    USED_MB     SIZE_MB USED_PERCENT
------------------------------ ----------- ----------- ------------
TEST                                     0         128            0

SQL> select tablespace_name,used_space*8192/1024/1024 used_mb
     ,tablespace_size*8192/1024/1024 size_mb,used_percent
    from dba_tablespace_usage_metrics where tablespace_name = 'TEST';

TABLESPACE_NAME                    USED_MB     SIZE_MB USED_PERCENT
------------------------------ ----------- ----------- ------------
TEST                                   128         128          100

SQL> select sum(bytes)/1024/1024 from dba_free_space where tablespace_name = 'TEST';


SQL> select sum(bytes)/1024/1024 from dba_data_files where tablespace_name = 'TEST';


As you can see, the autoextend setting causes the DBA_TABLESPACE_USAGE_METRICS view to report bad results.

UPDATE: 2013-10-04
I made a slight correction to the headings of the columns, as the calculations were in MB, not KB. Also, I never posted the MOS bug number for this, but it was a bug in the view. It has since been corrected, and I haven't seen this behavior since.

Friday, April 18, 2008

Rotate your Logs ... Two birds with one stone

For several weeks I have been struggling with getting a custom Grid Control (10g Release 3) report to a) run on a schedule successfully and b) email itself to me. I spent my customary time troubleshooting the problem myself, scouring Metalink, etc., and finally resorted to opening an SR with Oracle Support. Two weeks, many "run opatch lsinventory -detail" requests, and several suggestions to check my proxy settings (which I had done on my own several times) later, still no resolution. I decided to shelve the problem and move on to more pressing matters...

Enter the hand-off of the on-call pager to me. I was informed that we were getting some late-night pages from Grid Control regarding the HTTP_Server within the Oracle Management Service (OMS) itself. I had some free time (and my typical anal-retentive problem solving) and dove into the issue. I was fairly positive our 300-or-so-target Grid Control environment didn't currently have 10,000 active HTTP requests (and counting). I stumbled upon Metalink Document 436690.1, which seemed like a perfect fit. Apparently the Apache requests handled by port 1159 are not on any sort of log rotation, and the metrics get screwed up when the access_log reaches 2GB. I took a quick look at the access_log and, lo and behold, it was sitting at 2+GB. A quick change to httpd_em.conf and httpd_em.conf.template and a cycle of the OMS completed the fix. My cursory look at the metrics that were causing the pages looked awesome: no whacked-out, increasing values. Out of nowhere, I started receiving emails of the custom, scheduled report from above. It was very cool to see them working. A check of the scheduled report jobs, and they were now all succeeding.

I'll wait until early next week to update and close the SR with Oracle Support. While they didn't really help me solve my problem and I did 99% of the troubleshooting myself, I did learn about the emdiag toolkit and found some interesting items in that log.

Next challenge, to get a SQL Server plug-in deployed to a server through 2 firewalls. (Agent is already running with some interesting /etc/hosts work-arounds due to DNS limitations).

Monday, February 4, 2008

Inaugural Post, Prestidigitation of Oracle beta 0.1

I have been contemplating starting my own blog for a few weeks now. At first, wondering if it made any sense to add to the already cluttered Oracle Blogspace. After some serious thinking, I decided, 'why not'. I have a lot of opinions and ideas to offer on a myriad of topics. There's always room for a few more in this space.

The first hurdle was to come up with a name. Sure, I could just use my name, or some variation of the nicknames I've accumulated over the years, but I wanted something new, something bordering on clever. This post initiates the 'Prestidigitation of Oracle' blog. The Parlour of Prestidigitation was always one of my favorite rooms at the Magic Castle, and I've always been fascinated by the word itself. It seems fitting for a blog that will more than likely focus on Oracle, a product often revered for its mysticism, seeing as it is one of the most highly customizable, tunable, robust database platforms out there.

So, without further ado, I present Beta 0.1 of Prestidigitation of Oracle. I plan on focusing on Oracle topics and things I do on a daily basis for now, with some possible anecdotes on life thrown in.