Article ID: 226
Last updated: 22 Aug, 2018
UPDATE IN PROGRESS: We have recently installed new filesystems, nbp10–15, which have a new feature called progressive file layout that automatically increases the striping of a file as it gets larger. This means that you should not have to set striping manually. We are in the process of updating this article to reflect this change. The default stripe count for filesystems nbp1–8 remains 4.

At NAS, Lustre (/nobackup) filesystems are shared among many users and many application processes, which can cause contention for various Lustre resources. This article explains how Lustre I/O works, and provides best practices for improving application performance.

How Does Lustre I/O Work?

When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the file's stripes. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and object storage targets (OSTs) to perform I/O operations such as locking, disk allocation, storage, and retrieval.

If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results.

Jobs run on Pleiades contend for shared resources in NAS's Lustre filesystems. Each server that is part of a Lustre filesystem can handle only a limited number of I/O requests (read, write, stat, open, close, etc.) per second. An excessive number of such requests, from one or more users and one or more jobs, can lead to contention for storage resources. Contention slows the performance of your applications and weakens the overall health of the Lustre filesystem. To reduce contention and improve performance, please apply the practices below to your compute jobs while working in our high-end computing environment.

Best Practices

Avoid Using ls -l

The ls -l command displays information such as ownership, permissions, and size for all files and directories. The ownership and permission metadata is stored on the MDTs; however, the file size metadata is only available from the OSTs. So, the ls -l command issues remote procedure calls (RPCs) to the MDS/MDT and to the OSSes/OSTs for every file and directory to be listed. RPC requests to the OSSes/OSTs are costly and can take a long time to complete if there are many files and directories.

  • Use ls by itself if you just want to see if a file exists
  • Use ls -l filename if you want the long listing of a specific file
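
For example (the file name below is only illustrative):

pfe21% ls
pfe21% ls -l my_output.dat

The first command lists names only, so no size information is needed from the OSTs; the second requests the long listing for a single file rather than for every file in the directory.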

Avoid Having a Large Number of Files in a Single Directory

Opening a file holds a lock on its parent directory, so when many files in the same directory are opened at once, contention on that directory results. A better practice is to split a large number of files (in the thousands or more) across multiple subdirectories to minimize contention.
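
For example, output files can be distributed across several subdirectories instead of accumulating in one place (the directory and file names below are only illustrative):

pfe21% mkdir run_000 run_001 run_002 run_003
pfe21% mv case_0*.dat run_000/
pfe21% mv case_1*.dat run_001/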

Avoid Accessing Small Files on Lustre Filesystems

Accessing small files on the Lustre filesystem is not efficient. When possible, keep them on an NFS-mounted filesystem (such as your Pleiades home filesystem, /u/username) or copy them from Lustre to /tmp on each node at the beginning of the job, and then access them from /tmp.
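
For example, a job script could stage small input files to node-local /tmp before the run. The paths and program name below are hypothetical, and for multi-node jobs the copy must be performed on every node that reads the files:

cp /nobackup/your_username/inputs/*.nml /tmp/
cd /tmp
/u/your_username/bin/my_app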

Use a Stripe Count of 1 for Directories with Many Small Files

If you must keep small files on Lustre, be aware that stat operations are more efficient if each small file resides in one OST. Create a directory to keep small files in, and set the stripe count to 1 so that only one OST will be needed for each file. This is useful when you extract source and header files (which are usually very small files) from a tarfile. Use the Lustre utility lfs to create a specific striping pattern, or find the striping pattern of existing files.

pfe21% mkdir dir_name
pfe21% lfs setstripe -c 1 dir_name
pfe21% cd dir_name
pfe21% tar -xf tar_file
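
You can check the resulting striping pattern with lfs getstripe, either on the directory or on an individual extracted file (the file name below is illustrative):

pfe21% lfs getstripe dir_name
pfe21% lfs getstripe dir_name/header_file.h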

If there are large files in the same directory tree, it may be better to allow them to stripe across more than one OST. You can create a new directory with a larger stripe count and copy the larger files into that directory. Note that moving files into the directory with the mv command will not change their stripe count; files must be created in or copied to a directory to inherit its stripe count properties.

pfe21% mkdir dir_count_4
pfe21% lfs setstripe -c 4 dir_count_4
pfe21% cp file_count_1 dir_count_4

If you have a directory with many small files (less than 100 MB) and a few very large files (greater than 1 GB), it may be better to create a new subdirectory with a larger stripe count, store just the large files there, and create symbolic links to the large files with ln -s.

pfe21%  mkdir dir_name
pfe21%  lfs setstripe -c 16 dir_name
pfe21%  ln -s dir_name/large_file  large_file

Keep Copies of Your Source Code on the Pleiades Home Filesystem and/or Lou

Be aware that files under /nobackup are not backed up. Make sure you save copies of your source code, makefiles, and any other important files on your Pleiades home filesystem. If your Pleiades home directory quota isn't large enough to hold all of these files, you can request a larger quota and/or create tarred copies of the files on Lou.
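
For example, a hypothetical project's source files could be tarred up into your home filesystem directly from the Pleiades front end (the project and file names are illustrative):

pfe21% cd /nobackup/your_username/my_project
pfe21% tar -cf /u/your_username/my_project_src.tar src/ Makefile

For large tar operations that read from or write to Lustre, see the note later in this article about using shiftc or mtar instead of tar.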

Avoid Accessing Executables on Lustre Filesystems

There have been a few incidents on Pleiades where users' jobs encountered problems while accessing their executables on the /nobackup filesystem. The main issue is that the Lustre clients can become unmounted temporarily when there is a very high load on the Lustre filesystem. This can cause a bus error when a job tries to bring the next set of instructions from the inaccessible executable into memory.

Executables also run more slowly when launched from the Lustre filesystem; it is best to run executables from your home filesystem on Pleiades. On rare occasions, running executables from the Lustre filesystem can leave them corrupted. Avoid copying new executables over existing ones of the same name within the Lustre filesystem: the copy creates a window of time (about 20 minutes) during which the executable will not function. Instead, access the executable from your home filesystem at runtime.
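
For example, you might keep the installed binary under your home directory and launch it from there (the paths and names below are hypothetical):

pfe21% mkdir -p /u/your_username/bin
pfe21% cp /nobackup/your_username/build/my_app /u/your_username/bin/

Then, in your PBS script, launch /u/your_username/bin/my_app rather than a copy under /nobackup.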

Increase the Stripe Count for Parallel Access to the Same File

The Lustre stripe count sets the number of OSTs the file will be written to. When multiple processes access blocks of data in the same large file in parallel, I/O performance may be improved by setting the stripe count to a larger value. However, if the stripe count is increased unnecessarily, the additional metadata overhead can degrade performance for small files.

By default, the stripe count is set to 4, which is a reasonable compromise for many workloads while still providing efficient metadata access (for example, to support the ls -l command). However, for large files, the stripe count should be increased to improve the aggregate I/O bandwidth by using more OSTs in parallel. In order to achieve load balance among the OSTs, we recommend using a value that is an integral factor of the number of processes performing the parallel I/O. For example, if your application has 64 processes performing the I/O, you could test performance with stripe counts of 8, 16, and 32.

TIP: To determine which number to start with, find the approximate square root of the size of the file in GB, and test performance with the stripe count set to the integral factor closest to that number. For example, for a file size of 300 GB the square root is approximately 17; if your application uses 64 processes, start performance testing with the stripe count set to 16.
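
For instance, to have the 64-process job from the example above write its shared file with a stripe count of 16, you could set the striping on the output directory (the directory name is illustrative) so that files created there inherit it:

pfe21% lfs setstripe -c 16 output_dir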

Restripe Large Files

If you have other large files, make sure they are adequately striped. You can use a minimum of one stripe per 100 GB (one stripe per 10 GB is recommended), up to a maximum stripe count of 120. If you plan to use the file as job input, consider adjusting the stripe count based on the number of parallel processes, as described in the previous section.

If you have files larger than 15 TB, please contact User Services for more guidelines specific to your use case.

We recommend using the shiftc tool to restripe your files. For example:

  1. Run ls -lh to view the size of the file(s):
    % ls -lh data/large_file data/huge_file
    -rw-rw---- 1 zsmith  g1001 555G Apr 14 22:21 data/large_file
    -rw-rw---- 1 zsmith  g1001 3.2T Apr 14 22:21 data/huge_file
    

    When a file is less than 1,200 GB, simply use one stripe per 10 GB. For a larger file, you can specify a maximum stripe count of 120.

  2. Use shiftc to copy the file to a new file with this number of stripes:
    % shiftc --stripe=10g large_file large_file.restripe
    % shiftc --stripe=120 huge_file huge_file.restripe
    
  3. Verify that the file was successfully copied. You should receive an email report generated by the shiftc command, or you can run shiftc --status. In the email or status output, check that the state of the operation is “done”.

  4. Move the new file in place of the old file:
    % mv large_file.restripe large_file
    % mv huge_file.restripe huge_file
    

For more information, see Using Shift for Local Transfers and Tar Operations.

Stripe Files When Moving Them to a Lustre Filesystem

When you copy large files onto the Lustre filesystems, such as from Lou or from remote systems, be sure to use a sufficiently increased stripe count. You can do this before you create the files by using the lfs setstripe command, or you can transfer the files using the shiftc tool, which automatically stripes the files.

Note: Use mtar or shiftc (instead of tar) when you create or extract tar files on Lustre.
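
For example, to prepare a destination directory for a roughly 300 GB file, following the one-stripe-per-10-GB guideline above (the directory name is illustrative):

pfe21% mkdir /nobackup/your_username/incoming
pfe21% lfs setstripe -c 30 /nobackup/your_username/incoming

Files then copied or transferred into that directory inherit the 30-stripe layout; alternatively, transferring with shiftc stripes the files automatically.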


Limit the Number of Processes Performing Parallel I/O

Given that the numbers of OSSes and OSTs on Pleiades are about a hundred or fewer, there will be contention if a large number of an application's processes perform parallel I/O at the same time. Instead of allowing all processes to do the I/O, choose just a few processes to do the work. For writes, these processes should collect the data from the other processes before writing; for reads, they should read the data and then broadcast it to the others.

Experiment with Different Stripe Counts/Sizes for MPI Collective Writes

For programs that call MPI collective write functions, such as MPI_File_write_all, MPI_File_write_at_all, and MPI_File_write_ordered, it is important to experiment with different stripe counts on the Lustre /nobackup filesystems in order to get good performance.

Background

MPI I/O supports the concept of collective buffering. For some filesystems, when multiple MPI processes write to the same file in a coordinated manner, it is much more efficient for the processes to send their data to a subset of processes, which then perform a smaller number of larger writes. By default, with collective buffering, the write size is set to be the same as the stripe size of the file.

With Lustre filesystems, the SGI MPT algorithm that chooses the number of MPI processes to do the writes depends on two main factors: the stripe count and the number of nodes. When the number of nodes is greater than the stripe count, the number of collective buffering processes equals the stripe count. Otherwise, the number of collective buffering processes is the largest integer less than the number of nodes that evenly divides the stripe count. MPT then uses the first rank on each of the first n nodes to obtain n collective buffering processes.

Note: Intel MPI behaves similarly to SGI MPT on Lustre filesystems.

Enabling Collective Buffering Automatically

You can let each MPI implementation enable collective buffering for you, without any code changes.

SGI MPT automatically enables collective buffering for the collective write calls using the algorithm described above. This method requires no changes in the user code or in the mpiexec command line. For example, if the stripe count is 1, only rank 0 does the collective writes, which can result in poor performance. Therefore, experimenting with different stripe counts on the whole directory and/or individual files is strongly recommended.

Intel MPI also does collective buffering, similar to SGI MPT, when the I_MPI_EXTRA_FILESYSTEM and I_MPI_EXTRA_FILESYSTEM_LIST variables are set appropriately, as follows:

mpiexec.hydra  -env I_MPI_EXTRA_FILESYSTEM on \
        -env  I_MPI_EXTRA_FILESYSTEM_LIST lustre \
        -np xx a.out

Enabling Collective Buffering via Code Changes

In this method, you provide "hints" in the source code to inform MPI what to do with specific files. For example:

   call MPI_Info_create(info, status)
   call MPI_Info_set(info, "romio_cb_write", "enable", STATUS)
   call MPI_Info_set(info, "striping_unit", "1048576", STATUS)
   call MPI_Info_set(info, "striping_factor", "16", STATUS)
   ...
   call MPI_File_open(MPI_COMM_WORLD, file_name, MPI_MODE_WRONLY, info, unit, status)

Note: The hints are only advisory and may not be honored. For example, SGI MPT 2.12r26 honors these hints, but MPT 2.14r19 does not. Intel MPI 5.0x honors these hints when the I_MPI_EXTRA_FILESYSTEM and I_MPI_EXTRA_FILESYSTEM_LIST variables are set appropriately, as follows:

mpiexec.hydra  -env I_MPI_EXTRA_FILESYSTEM on \
        -env  I_MPI_EXTRA_FILESYSTEM_LIST lustre \
        -np xx a.out

Stripe Align I/O Requests to Minimize Contention

Stripe aligning means that processes access the file at offsets that correspond to stripe boundaries. This helps to minimize the number of OSTs a process must communicate with for each I/O request. It also helps to decrease the probability that multiple processes accessing the same file communicate with the same OST at the same time.

One way to stripe-align a file is to make the stripe size the same as the amount of data in the write operations of the program.
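
For example, if each process writes 8 MB per write call, you could create the file in a directory whose stripe size matches that amount (the -S option sets the stripe size; the values and directory name below are only illustrative, and the stripe count should still be chosen based on file size and process count as described above):

pfe21% lfs setstripe -c 16 -S 8m output_dir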

Avoid Repetitive "stat" Operations

Some users have implemented logic in their scripts to test for the existence of certain files. Such tests generate "stat" requests to the Lustre server. When the testing becomes excessive, it creates a significant load on the filesystem. A workaround is to slow down the testing by adding a sleep command to the logic. For example, the following user script tests for the existence of the files WAIT and STOP to decide what to do next.

touch WAIT
 rm STOP

 while ( 0 <= 1  )
  if(-e WAIT) then
    mpiexec ...
    rm WAIT
  endif
  if(-e STOP) then
    exit
  endif
 end

When neither the WAIT nor STOP file exists, the loop ends up testing for their existence as quickly as possible (on the order of 5,000 times per second). Adding sleep inside the loop slows down the testing.

touch WAIT
 rm STOP

 while ( 0 <= 1  )
  if(-e WAIT) then
    mpiexec ...
    rm WAIT
  endif
  if(-e STOP) then
    exit
  endif
  sleep 15
 end

Avoid Having Multiple Processes Open the Same File(s) at the Same Time

On Lustre filesystems, if multiple processes try to open the same file(s) at the same time, some processes will not be able to find the file(s) and your job will fail.

The source code can be modified to call the sleep function between I/O operations. This will reduce the occurrence of multiple, simultaneous access attempts to the same file from different processes.

100  open(unit,file='filename',IOSTAT=ierr)
      if (ierr.ne.0) then
       ...
      call sleep(1)
      go to 100
      endif

When opening a read-only file in Fortran, use ACTION='read' instead of the default ACTION='readwrite'. The former will reduce contention by not locking the file.

open(unit,file='filename',ACTION='READ',IOSTAT=ierr) 

Avoid Repetitive Open/Close Operations

Opening and closing files incurs overhead, so repetitive open/close operations should be avoided.

If you intend to open the files for read only, make sure to use ACTION='READ' in the open statement. If possible, read the files once each and save the results, instead of reading the files repeatedly.

If you intend to write to a file many times during a run, open the file once at the beginning of the run. When all writes are done, close the file at the end of the run.


See Lustre Basics for more information.

Use the Soft Link to Refer to Your Lustre Directory

Your /nobackup directory is created on a specific Lustre filesystem, such as /nobackupp7 or /nobackupp8, but you can use a soft link to refer to the directory no matter which filesystem it is on:

/nobackup/your_username

By using the soft link, you can easily access your directory without needing to know the name of the underlying filesystem. Also, you will not need to change your scripts or re-create any symbolic links if a system administrator needs to migrate your data from one Lustre filesystem to another.
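
For example, you can check which filesystem your directory currently resides on while continuing to use the generic path everywhere else:

pfe21% ls -ld /nobackup/your_username

The long listing typically shows the link target (a directory under one of the /nobackupp* filesystems, such as /nobackupp7 or /nobackupp8), while your scripts and batch jobs can keep referring to /nobackup/your_username.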

Preserve Corrupted Files for Investigation

When you notice a corrupted file in your /nobackup directory, it is important to preserve the file to allow NAS staff to investigate the cause of corruption. To prevent the file from being accidentally overwritten or deleted by your scripts, we recommend that you rename the corrupted file using:

pfe% mv filename filename.corrupted

Note: Do not use cp to create a new copy of the corrupted file.

Report the problem to NAS staff by sending an email to support@nas.nasa.gov. Include how, when, and where the corrupted file was generated, and anything else that may help with the investigation.

Reporting Problems

If you report performance problems with a Lustre filesystem, please be sure to include the time, hostname, PBS job number, name of the filesystem, and the path of the directory or file that you are trying to access. Your report will help us correlate issues with recorded performance data to determine the cause of efficiency problems.

Also read
Lustre Basics
Pleiades Lustre Filesystems

