UNIX Hints & Hacks

Chapter 1: Topics in Administration

1.15 Building Large Dummy Files

1.15.1 Description

Create large files, up to 100MB or even beyond, for testing various system functions.

Example One: dd

Flavors: All

Shells: All

Syntax:

dd if=file of=file bs=n count=n

The dd command has many uses. Not only will it convert files but it will also copy files. So where do you find a file over 100MB to copy or convert with dd?

Zero. Zero? Yes, there is a wonderful device called /dev/zero. It is a special file that always returns a buffer full of zeros when read. The best thing about it is that the supply is endless:

% dd if=/dev/zero of=100megs bs=10000 count=10000
10000+0 records in
10000+0 records out
% ls -al 100megs
-rw-r--r--   1 foo      staff    100000000 Sep 26 01:48 100megs

This dd command reads 10,000-byte blocks of zeros and writes 10,000 of them into the file called 100megs. In no time you will have a file that is exactly 100,000,000 bytes, or 100MB. The numbers can be tweaked to create a file even larger or smaller, depending on your needs.
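The rule of thumb is that bs multiplied by count gives the size of the file in bytes. As a quick sketch (the file names here are just illustrative), a 10MB or a 500MB file can be built the same way:

% dd if=/dev/zero of=10megs bs=10000 count=1000
% dd if=/dev/zero of=500megs bs=10000 count=50000

The first writes 1,000 blocks of 10,000 bytes for 10,000,000 bytes; the second writes 50,000 such blocks for 500,000,000 bytes.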

Example Two: Scripting dd

Flavors: All

Shells: All

Syntax:

bigfile.sh n

A quick one-line shell script called bigfile.sh can be written to pass any size in megabytes to the dd command:

dd if=/dev/zero of=${1}megs bs=1000000 count=$1

Line 1: Creates the file by writing a 1MB block the same number of times as the size passed to the script.

When the value 100 is passed to the bigfile.sh script, the dd command creates a file called 100megs out of 100 blocks of zeros, each 1MB in size.

% bigfile.sh 100
100+0 records in
100+0 records out
% ls -al 100megs
-rw-r--r--   1 foo      staff    100000000 Sep 26 02:02 100megs

Any value can now be passed to the script, and a file of that exact size in megabytes will be created. This gives you the versatility to build files of any size quickly.
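One caveat: if the script is run with no argument, dd receives an empty count and fails with an unhelpful error. A slightly more defensive sketch of the script, with a hypothetical usage message added, might look like this:

#!/bin/sh
# bigfile.sh - build a file of n megabytes of zeros
if [ -z "$1" ]; then
        echo "usage: bigfile.sh size-in-megabytes" >&2
        exit 1
fi
dd if=/dev/zero of=${1}megs bs=1000000 count=$1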

Example Three: The Perl Way

Flavors: All

Shells: Perl

Syntax:

bigfile.pl [n]

In this method you use Perl to generate the 100MB file. The script fills a file with asterisks (*) up to the exact size in megabytes that is passed to it, then names the file after that size.

#! /usr/local/bin/perl

$SIZE=shift(@ARGV);
$LIST="";

open (FILE, "> megfile");
for ($CNT = 0; $CNT < 100000; $CNT++ )
   {
   print FILE "**********";
   }
close(FILE);

for ($CNT = 0; $CNT < $SIZE; $CNT++ )
   { $LIST="$LIST megfile" }

`cat $LIST > ${SIZE}megs`;

Line 1: Define the location of the Perl interpreter being used.

Line 3: Read in the size, in megabytes, that the file will be.

Line 4: Null out the variable that will hold the list of 1MB files.

Lines 6-11: Create the first 1MB and call it megfile.

Lines 13-14: Append the name of the 1MB file to the list variable as many times as the size that was passed to the script.

Line 16: cat the list of 1MB files together into the final file.

To have this script generate a 100MB file, type the following command:

% bigfile.pl 100
% ls -al 100*
-rw-r--r--     1 foo staff      100000000 Sep 26 02:55 100megs

The script automatically builds a 1MB file and copies it 100 times. The file that results is called 100megs.
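Whichever method is used, the result can be double-checked with wc, which counts the bytes in the file; a quick sketch:

% wc -c 100megs
 100000000 100megs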

Reason

There is always a need for large files, most commonly for testing purposes: new disks, controllers, SCSI buses, and network bandwidth all need to be exercised.

Real World Experience

There is nothing worse than watching a disk drive about to die. On occasion, you might see a read/write I/O error on your console or in your system logs, yet the disks appear to be fine. One simple test, moving a very large file across filesystems, through controllers and SCSI buses, can help diagnose where the problem resides. Small files often can't make the problem appear.

Creating a very large file and using ftp, rcp, or NFS to copy it across the network helps in monitoring traffic and bandwidth when a packet sniffer is attached to the network. When diagnosing network problems, small files sometimes aren't enough to show where the problem exists; with so much traffic moving across the network, a small transfer can be a needle in a haystack while you sniff for the problem.
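A rough sketch of putting this to work, assuming a second filesystem mounted at /mnt/disk2 and a reachable host named remotehost (both names are hypothetical), is to time the same large file over each path and compare the runs:

% bigfile.sh 100
% time cp 100megs /mnt/disk2/100megs
% time rcp 100megs remotehost:/tmp/100megs

A transfer that is dramatically slower over one path than the others points at the disk, controller, or network segment on that path.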

Other Resources

Man pages:

dd, zero
