Oracle Linux: Insufficient memory to auto-enable kdump

Remember how I said “you should be all set” last week? Turns out, I was only partially right. After you create a local user account, Linux also configures kdump, the kernel crash dumping mechanism. When attempting to do so, it returned this error message:

Insufficient memory to auto-enable dump. Use system-config-kdump to configure manually

As problems go, this one is fairly minor; kdump does not need to be enabled or configured for everything else to work. I made a note of the issue, clicked right through the error, and logged in. That said, I really don’t like leaving something like that without looking into resolving it. I immediately grokked that system-config-kdump refers to a shell command, opened a terminal, and entered the command. It opened this window:

KDump config window

I proceeded to click “Enable”, then “Apply”. The system asked for a reboot, then prompted for the root password three times. After that, it returned the following error:

Starting kdump:[FAILED]

On a hunch, I increased the kdump memory to 160 MB and tried to apply once more. After I had been prompted for the root password three more times, the settings were saved. Peculiarly, the kdump memory was shown as its original 128 MB. Following a reboot, I checked: kdump was up and running, with the memory still set to 128 MB.
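If you want to verify the result without the GUI, a couple of commands come in handy; a minimal sketch, assuming an Oracle Linux 6 install with SysV init scripts (the reserved amount simply reflects whatever crashkernel value is in the boot parameters):

    # Check whether the kdump service is running
    service kdump status
    # Show the crashkernel boot parameter and the memory actually reserved for the crash kernel
    grep -o 'crashkernel=[^ ]*' /proc/cmdline
    cat /sys/kernel/kexec_crash_size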

Starting out with Oracle Linux

I have been wanting to learn more about Linux for some time now, and the time has come to turn that desire into action. I have landed on Oracle Linux, for reasons that will soon become clear. As I don’t have a machine to dedicate to this, and because I want the ability to take snapshots and revert to previous states, I will be running Linux as a virtual machine.

For a while to come, you will see posts like this, all stored in the Linux101 Category, where I chronicle the specific challenges, annoyances and problems I run into, as well as their solutions, when I find them.

I have some experience running virtual machines and prefer VirtualBox for such pursuits, because it is intuitive to set up and free to use (that is free as in freedom, AND as in beer). All Oracle software resources are distributed from the Oracle Technology Network (OTN). Using it is free, though I gather some resources are not available there (more on this later, when and if I find out more).

In order to install Oracle Linux, you naturally need to download the installation media, distributed as .iso disk images. Only a single file is needed; as I am running Oracle Linux 6, Update 5, that file is V41362-01.iso. Four other iso files are available from OTN, but we do not need those at this point.

For future reference, here’s how I set my environment up:

  • Download the installer from OTN
  • In VirtualBox, create a new virtual machine, setting it up as a Linux, Oracle (64 bit) system (a command-line sketch of these steps follows the list)
  • I gave it 1600 MB RAM and a 55 GB dynamically allocated VDI hard drive
  • Booting the virtual machine, I installed the OS from the downloaded installer
  • Going through the installer, I kept options at default, setting location, keyboard and root password as appropriate
  • Wanting to have a GUI, I opted for a Desktop install

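The same machine can also be created from the command line with VBoxManage; here is a minimal sketch of the steps above, assuming the VM name “OL6u5” and that the downloaded iso sits in the working directory (the names and paths are mine, so adjust to taste):

    # Create and register the VM; the Oracle_64 OS type matches "Oracle (64 bit)"
    VBoxManage createvm --name OL6u5 --ostype Oracle_64 --register
    # 1600 MB RAM
    VBoxManage modifyvm OL6u5 --memory 1600
    # 55 GB dynamically allocated VDI disk, attached to a SATA controller
    VBoxManage createhd --filename OL6u5.vdi --size 56320 --variant Standard
    VBoxManage storagectl OL6u5 --name SATA --add sata
    VBoxManage storageattach OL6u5 --storagectl SATA --port 0 --device 0 --type hdd --medium OL6u5.vdi
    # Attach the downloaded installer and boot the VM from it
    VBoxManage storagectl OL6u5 --name IDE --add ide
    VBoxManage storageattach OL6u5 --storagectl IDE --port 0 --device 0 --type dvddrive --medium V41362-01.iso
    VBoxManage startvm OL6u5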
Having done all that, you need to wait for a while; the installation procedure takes anywhere from three to fifteen minutes. Once the VM has rebooted, you need to create a local user account (in addition to root), and that’s it, you’re all set!

The importance of CSI

CSI – that’s Continual Service Improvement, by the way, not Crime Scene Investigation – is, to my mind, the single most important stage in the ITIL service life cycle. It evaluates what has gone before, identifies areas for improvement, and aids in implementing those improvements. In an ideal situation, CSI informs all the other stages, and is the main driver for the service life cycle.

The basic premise and assumption is that there is always room for improvement, revision and change. By controlling that process tightly, using metrics, KPIs (Key Performance Indicators) and CSFs (Critical Success Factors), we move ourselves, our business and our customers onward to better reliability and service.

A recent article in the Norwegian press dealt with the retirement from professional shooting of the premier Norwegian skeet shooter and London 2012 silver medalist, who used the opportunity to direct some pretty scathing criticism at the leadership of the national team, saying that the manager was “the biggest amateur of them all”. In response, the manager of the national shooting team commented:

Following the evaluation of the last season, the most successful in several years, the shooters’ association found no reason to make any changes. That can’t be amateurish.

Now, the premise that the manager lays down is that, because they were unable to find any areas for improvement, they are not amateurs. My contention would be that they a) are amateurs and b) haven’t been looking hard enough. Sure, the current premier shooters may be as good as they will ever get, but that doesn’t mean that nothing can be changed or improved.

The arrogance of saying “we can’t find anything to improve, so we must be professionals” is staggering, and indicates to me that he is, indeed, an amateur. Imagine if the foremost athletes in the world, on winning an Olympic gold medal, were to say “well, that’s that, then; I have no room for improvement now”. How would the world of sports look?

No, we must define metrics, measure them and compare them to KPIs and CSFs. That is the way forward, onward and upward. Following the national team manager’s lead means that we can only ever be as good as we are, and will likely become worse.

List user home folder size

Users tend to store all kinds of crud in their network home folders, which can be a constant source of frustration for sysadmins. Luckily, it is fairly easy to get a list of the size of each folder using a PowerShell script. The script has already been made for us; it is discussed in detail here, and can be downloaded from here. You can do this from your local computer or via Remote Desktop on the server hosting the folders; the procedure is the same.

      1. Open File Explorer
      2. Navigate to the root folder for home folders
      3. Copy the script to that folder
      4. Start PowerShell using an account that has the necessary permissions (typically an admin account)
      5. In PowerShell, navigate to the root folder for home folders
      6. Run the script, using this command: .\Get-DirStats.ps1 >> c:\users\XXXX\Desktop\List.txt

(Replace XXXX with your user folder)

The output is a list of all user folders, by name. To sort it, simply import it into Excel.

The folder sizes are listed in bytes. To convert them to gigabytes, simply divide each value by 1073741824 (individually, of course).
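If you would rather have the sizes in gigabytes and sorted straight away, a minimal PowerShell sketch along the same lines can do it; note that this is not the linked Get-DirStats.ps1 script, and that the root folder and output path below are assumptions to adjust (PowerShell 3.0 or later):

    # Size of each top-level folder under the home-folder root, in GB, largest first
    $root = 'D:\HomeFolders'    # assumed root folder for home folders
    Get-ChildItem -Path $root -Directory | ForEach-Object {
        $bytes = (Get-ChildItem -Path $_.FullName -Recurse -File -ErrorAction SilentlyContinue |
                  Measure-Object -Property Length -Sum).Sum
        [PSCustomObject]@{ Folder = $_.Name; SizeGB = [math]::Round($bytes / 1GB, 2) }
    } | Sort-Object SizeGB -Descending | Out-File 'C:\Users\XXXX\Desktop\List.txt'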

Who protects you?

Since 2011, the EFF, the Electronic Frontier Foundation, has published an annual report, “Who Has Your Back”, detailing how a number of different companies deal with government data requests. 2014 is no exception, and the report came out a few weeks ago.

New to the report are Adobe, the Internet Archive and Snapchat, to mention a few, and everyone on the list has been re-checked since the last report. In general, most companies protect you fairly well, with Apple, Dropbox, Google and a few others receiving full marks.

The report is an interesting read, if you care about these things, and worth a gander.

Script output to file: Append -v- Replace

By default, when running a script, the output is shown to you in the command line interface. Most of the time, that is exactly what you want. Sometimes, though, you are more interested in logging than in real-time data. The solution is to send the output to a file.

As an example, imagine the following: you want to see where the traffic to a server ends up, so you run a traceroute to find out. Normally, the command would look like this: tracert server.dom

Here’s what you would use to route the output to a text file: tracert server.dom > textfile.txt.

Now, that’s all well and good, until you want to add more logs to the same file. The problem is that a single > will replace the contents of the file in question with the new information. To append, rather than replace, use a double >, like so: tracert server.dom >> textfile.txt.
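As a usage sketch, assuming the Windows command prompt, you can also stamp each run before appending it, so the log stays readable as it grows (the file name is just an example):

    rem Append a timestamp, then append the traceroute output to the same log
    echo ---- %date% %time% ---- >> textfile.txt
    tracert server.dom >> textfile.txt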

Defining SMART goals and objectives

A project’s ultimate success or failure can only be judged when measured against predefined goals and objectives. These are defined during the planning phase of the project, and it is imperative that they be SMART. By that I mean:

  • Specific
  • Measurable
  • Achievable
  • Relevant
  • Time constrained

Hughes and Cotterell (2009) write: “A project can be a success on delivery but then be a business failure. On the other hand, a project could be late and over budget, but its deliverables could still, over time, generate benefits that outweigh the initial expenditure.” A few paragraphs later, they write something that gave me pause: “Because the focus of project management is, not unnaturally, on the immediate project, it may not be seen that the project is actually one of a sequence.” What we learn from this is that a sense of history, of what has come before, is important, and potentially imperative, to the success of the project at hand, as well as its successors.

This is where the definition of SMART goals and objectives comes in; by measuring the product of a project against these, by defining goals and objectives for both the project and the business, and by letting these be informed by what has come before, we get better projects, delivering better end results.

Hughes, B. and Cotterell, M. (2009) Software Project Management, London: McGraw-Hill

Link without improving the link target’s search engine ranking

You’ve done your research, and you are ready to publish an article that reflects little or no honor on its subject. In order to prove your point, you need to link to a website run by the subject of your article. You stop and feel uneasy, as doing so will improve their search engine ranking.

That is how I felt a few months back, when I wrote this article. I felt that the article I was criticizing had little or no merit, and that the articles on the site generally were of low quality. I certainly didn’t want my article to help them do what they were doing.

Luckily, there is a service for that, called Do Not Link. Rather than using the direct link to the site, you run it through their service and get a shortcut URL, which directs the reader to Do Not Link, which in turn shows the reader the page, with a banner added at the top.

All in all, I think this is a good way of linking, while avoiding adding to someone’s online credibility.

Disable Facebook’s automatic play feature in the iOS app

Last week, I showed you how to disable Facebook’s automatic play feature when using Facebook in the browser. However, this annoying feature is also enabled by default in the iOS (and, I assume, Android) apps. Luckily, it can be disabled there, too. Here’s how:

  • Open Settings (that’s from the Home screen, not from Facebook), then scroll down to Facebook
  • Go to Settings
  • Under Video, go to Auto-Play
  • Set the setting you want

The app offers three settings: On, Wi-Fi only and Off.

Disable Facebook’s auto-playing videos feature in the browser

Facebook recently added a new feature: whenever you scroll to a video in your timeline, it automatically starts playing. If you like it, well, good for you; you have no need for this post. If, however, you, like me, think it’s an annoyance, here’s how to disable it:

  1. On the right-hand side of the Facebook top menu, click the downward-pointing arrow, then Settings
  2. Under Videos, you will find a setting called “Auto-Play Videos” – set that to “Off”

Annoyance stopped. Enjoy!