I’m curious what anyone who reads this blog thinks. My first reaction when someone mentions Ubuntu server is to grab the nearest trout and start slapping. Don’t get me wrong, I like Ubuntu. It’s very nice on a workstation, and suitable for my wife, mother, aunt, etc. But do you really think it’s good enough for prime time in the data center? According to a server survey conducted by the Ubuntu marketing team, almost 80% of users see Ubuntu as ready for mission-critical use.
Monitoring and analyzing performance is an important task for any sysadmin. Disk I/O bottlenecks can bring applications to a crawl. What are IOPS? Should I use SATA, SAS, or FC? How many spindles do I need? What RAID level should I use? Is my system read- or write-heavy? These are common questions for anyone embarking on a disk I/O analysis quest. Obligatory disclaimer: I do not consider myself an expert in storage or anything else for that matter.
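As a concrete starting point, something like iostat from the sysstat package gives a first read on whether a box is read- or write-heavy; this is only a sketch, and the device name sda is a placeholder.

    yum install sysstat    # provides iostat (RHEL/CentOS style)
    iostat -x sda 5        # extended stats for sda every 5 seconds
    # r/s and w/s approximate read and write IOPS, await shows average latency,
    # and %util hints at whether the device is saturated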
As anyone in the search industry can tell you, getting website traffic for your primary keywords and phrases in organic search can mean the difference between the success and failure of your website. If you are a site owner, website failure can have a detrimental impact on the profitability of your company. If instead you are an SEO or web consultant, it can make or break your reputation in the industry.
I don’t think I bothered to complain here, but I sure sent my fair share of nasty-grams. In fact, Dell became one of my four-letter words after I heard they were firmware-locking Gen11 servers to only Dell drives. Of course it was a mistake, but I loved to unashamedly repeat the famous quote from Howard Shoobe: “There are a number of benefits for using Dell qualified drives in particular ensuring a positive experience and protecting our data.
Recently a developer came to me and said they were starting to see failed builds, apparently due to open file handle limitations on the build server. In case you’re not aware, by default there are limits on users to ensure they don’t hog all the resources of a system. Sometimes these limits need to be adjusted. In my case the “bamboo” user occasionally needed more than 1024 open files. I determined my system had a maximum number of open files of 1572928.
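For reference, a minimal sketch of the kind of checks and changes involved (the 8192 value is only an illustrative number):

    ulimit -n                    # per-user soft limit for the current shell, 1024 by default
    cat /proc/sys/fs/file-max    # system-wide ceiling on open files
    # to raise the per-user limit, add lines like these to /etc/security/limits.conf:
    #   bamboo  soft  nofile  8192
    #   bamboo  hard  nofile  8192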
I’m not a fan of OSX and I try to avoid it with the same tenacity that I avoid Windows. But I recently needed to have a Linux NFS export mounted on an OSX server. A simple mount server:/export /mymountpoint didn’t work and returned “Operation not permitted”. After a bit of digging I found the solution: I needed to instruct the client to use a privileged port by adding the “-P” option.
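In other words, something along these lines, using the same placeholder server and mount point as above:

    mount -t nfs -P server:/export /mymountpoint
    # -P forces the client to bind a reserved (privileged) source port, which many
    # Linux NFS servers require unless the export is marked "insecure"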
I don’t know how many of you know that I am a recovering Gentoo user. One of the staples of my desktop used to be keychain. Keychain is a simple wrapper for ssh-agent and gpg-agent. It eases the use of a single long-running agent per system instead of one per login session. For some reason this tool had fallen out of my basket when I switched to Debian several years ago.
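A typical way to wire it up is a single line in your shell startup file; this is just a sketch, and the key path is whatever key you actually use:

    # in ~/.bashrc or ~/.bash_profile: reuse one long-running agent and load a key
    eval $(keychain --eval --agents ssh ~/.ssh/id_rsa)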
Besides the GUI/VNC consoles, you can still use the equivalent of xm console in Citrix XenServer. On the host console, xe vm-list gets the list of running domains (just note the uuid of the domain you want). list_domains will list the domain names and uuids of the domains. Match up your uuid so you get the proper dom_id. The xm console equivalent is /usr/lib/xen/bin/xenconsole dom_id. It’s not in the root user’s $PATH, though I think it ought to be.
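Put together, the session looks roughly like this (the dom_id 12 is just a placeholder you read off list_domains):

    xe vm-list                        # note the uuid of the VM you want
    list_domains                      # match that uuid to its numeric dom_id
    /usr/lib/xen/bin/xenconsole 12    # attach to the console, xm console style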
Don’t even start with me about how telnet is horrid. It was outside of my control, but I recently had issues trying to enable telnet on a server. Typically it’s pretty straightforward: yum install telnet-server, chkconfig telnet on, chkconfig xinetd on, service xinetd start. Unfortunately for me this was not working. Every time I tried to telnet to the host after enabling it I would get an error message. telnet host Trying 203.
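For the record, here is that usual recipe broken out (RHEL/CentOS style, where telnetd runs under xinetd):

    yum install telnet-server
    chkconfig telnet on     # enable the xinetd-managed telnet service
    chkconfig xinetd on
    service xinetd start
    telnet host             # test from another box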
I recently had a need to push out a few settings to a group of iLOMs on new Sun servers. I really despise using a web interface for everything, so I took the ssh route. The first thing I tried to do, after determining the commands I needed, was to shove the commands in with ssh directly. It quickly became apparent that route just wasn’t going to work. When you log into the iLOM, “daemons” need to initialize.
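For context, the naive attempt looked something like the line below; the hostname and the iLOM command are placeholders, not the actual settings I was pushing. Because the CLI is still initializing when the session opens, the command tends to fail before it can run.

    ssh root@ilom-host 'show /SP/network'    # placeholder host and command; fails because
                                             # the iLOM CLI is not ready when ssh runs it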