
Re: [Help-bash] lock redirected file


From: Stephane Chazelas
Subject: Re: [Help-bash] lock redirected file
Date: Fri, 5 Jul 2019 19:27:53 +0100
User-agent: NeoMutt/20171215

2019-07-05 13:06:58 -0500, Peng Yu:
[...]
> If I run the following two commands in two separate bash sessions
> concurrently, output.txt will contain messed up results from both awk
> runs.
> 
> awk -e 'BEGIN { for(i=1;i<10;++i) { print 100+i; system("sleep 1"); } }' > output.txt
> awk -e 'BEGIN { for(i=1;i<10;++i) { print i; system("sleep 1"); } }' > output.txt
> 
> Is there a bash syntax to lock output.txt, ensuring the file
> redirected to by ">" is not written concurrently by more than one
> process? Thanks.
[...]

Not bash-specific, but you can use:


{ flock 3; awk ... > output.txt; } 3<> output.txt

Where flock first waits until it can get an exclusive lock on
the file.

With 3<> output.txt we do not truncate the file but still
create it if it doesn't exist (3>> would also work).
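As a concrete sketch of that pattern (assuming the util-linux flock
command and a sleep that accepts fractional seconds), two background
writers taking the lock this way never interleave their output:

```shell
#!/bin/sh
# Each writer locks fd 3 on the shared file before writing its burst
# of lines; flock serializes the bursts, so they never interleave.
# (Assumes the util-linux flock command is installed.)
rm -f output.txt                  # start the demo from a clean slate
writer() {
  { flock 3                       # block until we hold the exclusive lock
    for i in 1 2 3; do
      echo "$1-$i"
      sleep 0.1
    done >> output.txt            # append so the other burst survives
  } 3<> output.txt                # open without truncating, create if missing
}
writer A & writer B &
wait
cat output.txt
```

The demo appends with >> rather than > because with > the writer that
takes the lock second would truncate the first writer's burst.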

You could also just error out if the lock is already taken:

if flock -n 3; then
  awk ... > output.txt
else
  echo >&2 "something else is locking the file"
fi 3<> output.txt
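The non-blocking failure can even be seen within a single script
(again assuming util-linux flock), because locks taken on separately
opened descriptors of the same file conflict with each other:

```shell
#!/bin/sh
# fd 3 holds the exclusive lock; fd 4 is a second, independent open of
# the same file, so flock -n on it fails immediately instead of waiting.
result=$(
  { flock 3
    { flock -n 4 && echo "got lock" || echo "lock is busy"
    } 4<> output.txt
  } 3<> output.txt
)
echo "$result"
```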

flock is not a standard command but is found on several systems.
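On systems without flock, a common portable fallback (a sketch, not
from the original mail) is mkdir, which is atomic: only one concurrent
caller can create the lock directory.

```shell
#!/bin/sh
# mkdir either creates the directory or fails atomically, so it can
# serve as a crude mutex on any POSIX system.
lockdir=output.txt.lock
rmdir "$lockdir" 2>/dev/null      # start the demo from a clean slate
if mkdir "$lockdir" 2>/dev/null; then
  echo "lock acquired"
  # ... write to output.txt under the lock here ...
  rmdir "$lockdir"                # release the lock
else
  echo "something else is locking the file" >&2
fi
```

Unlike flock, this lock is not released automatically if the process
is killed, so stale lock directories have to be cleaned up by hand.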

I don't think bash has builtin support for file locking. zsh
does with its zsystem dynamically loadable builtin
(http://zsh.sourceforge.net/Doc/Release/Zsh-Modules.html#Builtins)
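For completeness, a sketch of the zsh facility that page describes
(assuming zsh with its zsh/system module is installed; the snippet
skips cleanly where it isn't):

```shell
#!/bin/sh
: > output.txt                  # zsystem flock needs an existing file
command -v zsh >/dev/null 2>&1 || { echo "zsh not installed"; exit 0; }
zsh -c '
  zmodload zsh/system           # provides the zsystem builtin
  zsystem flock output.txt &&   # exclusive lock, released on shell exit
    echo "locked output.txt"
'
```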

-- 
Stephane



