Most reliable way to store file contents in an array in bash

Hi Guys,

I have a file which has numbers in it separated by newlines as follows:

1.113
1.456
0.556
0.021
-0.541
-0.444

I am using the following code to store these in an array in bash:

FILE14=data.txt
ARRAY14=(`awk '{print}' $FILE14`)

However, this is causing an indexing problem (awk starts at index 1 and bash at index 0).

What is the best way to store these numbers from the file in an array? (I would prefer not to use awk.)

Thanks.

FILE14=data.txt
ARRAY14=( $(cat $FILE14 | tr '\n' ' ') )
# testing 
for i in $(seq 0 $((${#ARRAY14[@]}-1)))
do
    echo "i=$i - ${ARRAY14[$i]}"
done

Or (bash):

FILE14=data.txt
ARRAY14=( $(<"$FILE14") )
for (( i=0; i<${#ARRAY14[@]}; i++ )); do
  echo "${ARRAY14[$i]}"
done

Enumerating alternative:

for i in "${ARRAY14[@]}" ; do
  echo $i
done

Thanks, Scritinizer. I forgot to mention that the original file also has blank lines in it, which I want to store in the array as well. How can I modify the script so that the blank values are stored in the array too?

In that case I think you would need something like this:

FILE14=data.txt
i=0
# Read line by line so that blank lines are kept as empty array elements
while IFS= read -r line; do
  ARRAY14[$i]=$line
  i=$((i+1))
done < "$FILE14"
for (( i=0; i<${#ARRAY14[@]}; i++ )); do
  echo "i=$i - ${ARRAY14[$i]}"
done
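For reference, a quick side-by-side check (just a sketch, assuming data.txt now contains one or more blank lines; the SPLIT/KEPT names are only for illustration) of why the read loop is needed: a plain word-splitting assignment silently drops the blank lines, while the loop keeps them as empty elements.

FILE14=data.txt
# Word-splitting assignment: blank lines simply disappear
SPLIT=( $(<"$FILE14") )
# Read loop: blank lines are kept as empty elements
COUNT=0
while IFS= read -r line; do
  KEPT[$COUNT]=$line
  COUNT=$((COUNT+1))
done < "$FILE14"
echo "split: ${#SPLIT[@]} elements, read loop: ${#KEPT[@]} elements"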

Ok thanks that worked.

You can use mapfile (bash 4.x only).

# for i in {a..z};do echo $i >> infile;done
# unset array

# mapfile -O 1 array < infile
# echo ${array[3]}
c
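
Applied to the original data.txt, a minimal sketch (bash 4.x; -t trims the trailing newlines, and blank lines come back as empty elements at the usual 0-based indices):

FILE14=data.txt
# Read the whole file into the array in one call, one element per line
mapfile -t ARRAY14 < "$FILE14"
for i in "${!ARRAY14[@]}"; do
  echo "i=$i - ${ARRAY14[$i]}"
done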