AIX 7.1 high page faults

Hi guys, I hope you can help me with this situation.

I have 2 LPARs with AIX 7.1 and Oracle 11gR2 in Grid mode.

When I start nmon to check the current system health I notice that page faults are over 3000/s. So I opened a case with IBM, and they say the problem is not paging, pinning or memory, but heavy disk I/O from an application.

In fact I have 30% free memory, 0.3% of page space used and 16.7% of pinned memory.

Now I think the Oracle ARC is the cause, but how can I determine exactly which application is creating those page faults?

I have tried nmon with the t option and then 5 (for I/O), but everything seems normal.

Any suggestion would be appreciated.
Thanks

Are these errors showing up on a console or in the system logs? Most Unix systems I have worked with log page errors to /var/log/messages by default. I can usually see the device, or get a PID that I can cross-reference against output of the ps command or similar.

Hi, thanks for the fast reply :slight_smile:

Those errors show up in nmon like this:

  Code    Resource            Stats   Now       Warn    Danger                                                                  
      OK -> CPU               %busy   3.6%      >80%    >90%                                                                    
      OK -> Paging size       %free  99.7%      <20%    <10%                                                                    
      OK -> Paging Space      RAM:pg103.1%      <50%    <10%                                                                    
  DANGER -> Page Faults       faults 1915.0     >16/s   >160/s          

but page faults can go up to 100,000/s sometimes....

@zer0sig: this might be the case with Linux, but AIX is quite different. There is no /var/log/messages to begin with.

@Thread O/P:

Looks like this can't be solved without additional data.

Post the output of:

  vmstat -wT 1                 (let it run for 20-30 seconds)
  iostat 1 | grep -v ' 0\.0'   (for the same timespan)
  vmstat -vs

I hope this helps.

bakunin

Here is what you requested:

System configuration: lcpu=8 mem=32768MB

 kthr          memory                         page                       faults           cpu       time  
------- --------------------- ------------------------------------ ------------------ ----------- --------
  r   b        avm        fre    re    pi    po    fr     sr    cy    in     sy    cs us sy id wa hr mi se
  3   0    5419309    2443083     0     0     0     0      0     0    68  23226  9504  5  4 90  0 12:00:31
  2   0    5419345    2443045     0     0     0     0      0     0   152  21588  9172  6  4 88  2 12:00:32
  1   0    5419489    2442900     0     0     0     0      0     0    23  23683  8998  5  4 91  0 12:00:33
  7   0    5419489    2442900     0     0     0     0      0     0    23  30577 10250  6  6 88  0 12:00:34
  5   0    5419488    2442901     0     0     0     0      0     0    45  25708  9020  6  5 89  0 12:00:35
  5   0    5425603    2436785     0     0     0     0      0     0  1636  28791 10364 16  7 77  0 12:00:36
  0   0    5424859    2437528     0     0     0     0      0     0  1916  29239 11410 14  7 77  2 12:00:37
 15   0    5425578    2436802     0     0     0     0      0     0   161  29950  8977 17  7 77  0 12:00:38
  0   0    5424858    2437521     0     0     0     0      0     0   126  31529 10265  9  7 84  0 12:00:39
  4   0    5424858    2437521     0     0     0     0      0     0    72  21206  9001  4  4 92  0 12:00:40
  2   0    5429901    2432476     0     0     0     0      0     0   241  25462  8919 21  6 73  0 12:00:41
  3   0    5430051    2432327     0     0     0     0      0     0   764  29312  9763 26  9 65  0 12:00:42
  3   0    5430131    2432242     0     0     0     0      0     0   210 160845  8870 12  9 79  0 12:00:43
  2   0    5431184    2431196     0     0     0     0      0     0   149 181693  9884 40 16 44  0 12:00:44
  0   0    5432255    2430124     0     0     0     0      0     0    27  94288  9541 11  7 81  0 12:00:45
  8   0    5431184    2431195     0     0     0     0      0     0    40  23088  9296 10  5 85  0 12:00:46
  0   0    5430076    2432302     0     0     0     0      0     0    76  23561  9473  6  4 90  0 12:00:47
  2   0    5433497    2428880     0     0     0     0      0     0   742  25640  9872 24  6 69  1 12:00:48
  2   0    5430101    2432275     0     0     0     0      0     0   297  30404 10511  9  9 81  0 12:00:49
  2   0    5430035    2432339     0     0     0     0      0     0   132  27075  9550 11  6 82  0 12:00:50
 kthr          memory                         page                       faults           cpu       time  
------- --------------------- ------------------------------------ ------------------ ----------- --------
  r   b        avm        fre    re    pi    po    fr     sr    cy    in     sy    cs us sy id wa hr mi se
  3   0    5429956    2432416     0     0     0     0      0     0    54  23457  9651  6  4 90  0 12:00:51
  3   0    5429945    2432426     0     0     0     0      0     0   243  22004  9562  6  4 89  0 12:00:52
  2   0    5433131    2429240     0     0     0     0      0     0   244  25251  8972 19  5 76  0 12:00:53
  1   0    5433160    2429105     0     0     0     0      0     0   215  32130 10733  6  7 86  0 12:00:54
  4   0    5433165    2429099     0     0     0     0      0     0    57  25387  8959  5  5 90  0 12:00:55
  2   0    5433256    2429005     0     0     0     0      0     0   232  23550  9176 19  5 76  0 12:00:56
  3   0    5434183    2428078     0     0     0     0      0     0   137  26520  8972 32  6 62  0 12:00:57
  3   0    5433670    2428591     0     0     0     0      0     0   131  21800  8601 13  4 82  0 12:00:58
  1   0    5433670    2428590     0     0     0     0      0     0   204  38504 10179  8  7 85  0 12:00:59
  0   0    5433670    2428591     0     0     0     0      0     0   146  22071  9297  5  4 90  0 12:01:00
  1   0    5433671    2428588     0     0     0     0      0     0   111  22748  8940  6  4 90  0 12:01:01
  1   0    5433671    2428588     0     0     0     0      0     0    75  21475  9099  5  4 91  0 12:01:02
System configuration: lcpu=8 drives=10 paths=31 vdisks=1

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2           3.0     192.0      12.0        192         0
hdisk3           2.0     160.0      10.0        160         0
hdisk1           3.0     336.0      21.0        320        16

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           1.0      64.0       4.0         64         0

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2           1.0      48.0       3.0         48         0
hdisk0           3.0      40.0       8.0          0        40

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           1.0      28.0       7.0          0        28

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           1.0     161.0      25.0        112        49

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2           1.0     144.0       9.0        144         0
hdisk1           1.0     228.0      16.0        208        20
hdisk0           1.0      28.0       7.0          0        28

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           1.0      80.0       5.0         64        16

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           1.0      80.0       5.0         64        16

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           1.0       4.0       1.0          0         4

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           3.0     256.0      16.0        240        16
hdisk0           2.0     1276.0     109.0          0      1276

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2           1.0      36.0       3.0         16        20
hdisk1           1.0      96.0       6.0         80        16

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           1.0     112.0       7.0         96        16

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2           1.0     116.0      11.0        112         4
hdisk1           1.0     224.0      14.0        208        16
hdisk0           1.0      40.0       5.0          0        40

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           1.0      80.0      20.0          0        80

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           2.0     208.0      13.0        192        16

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           1.0      66.0       7.0         48        18

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           1.0      80.0       5.0         64        16

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           1.0      60.0      15.0         40        20

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2           1.0      16.0       1.0         16         0
hdisk3           1.0      32.0       2.0         32         0
hdisk1           1.0      96.0       6.0         96         0

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2           1.0      64.0       4.0         64         0
hdisk0           5.0     860.0     215.0          0       860

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk3           1.0      16.0       1.0         16         0

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2           9.0     9128.0      41.0       9104        24
hdisk3           8.0     8064.0      32.0       8064         0
hdisk1           9.0     13136.0      59.0      13104        32
hdisk0           1.0      28.0       7.0          0        28

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           4.0     412.0     103.0          0       412

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           1.0      28.0       7.0          0        28

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           1.0      64.0       5.0         48        16
hdisk0           4.0     2928.0     567.0          0      2928

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2           1.0      80.0       5.0         80         0
hdisk1           1.0     160.0      10.0        144        16

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk6           1.0     320.0      68.0        320         0
hdisk7           1.0     320.0      68.0        320         0

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2           8.0     208.0      14.0        208         0
hdisk3          11.0     240.0      15.0        240         0
hdisk1           8.0     368.0      23.0        352        16
hdisk0           2.0     332.0      83.0          0       332
          2323024746 total address trans. faults
               462190 page ins
              6958561 page outs
                    0 paging space page ins
                    0 paging space page outs
                    0 total reclaims
            856457822 zero filled pages faults
             93763059 executable filled pages faults
                    0 pages examined by clock
                    0 revolutions of the clock hand
                    0 pages freed by the clock
             22398240 backtracks
                    0 free frame waits
                    0 extend XPT waits
               236328 pending I/O waits
              7420752 start I/Os
              4288502 iodones
           3934680209 cpu context switches
             64041885 device interrupts
             62118494 software interrupts
           1923454125 decrementer interrupts
             24698541 mpc-sent interrupts
             24698541 mpc-receive interrupts
              1200083 phantom interrupts
                    0 traps
          13938141199 syscalls
              8388608 memory pages
              8058736 lruable pages
              2428935 free pages
                    1 memory pools
              1404618 pinned pages
                 90.0 maxpin percentage
                  3.0 minperm percentage
                 90.0 maxperm percentage
                  4.7 numperm percentage
               384661 file pages
                  0.0 compressed percentage
                    0 compressed pages
                  4.7 numclient percentage
                 90.0 maxclient percentage
               384661 client pages
                    0 remote pageouts scheduled
                    5 pending disk I/Os blocked with no pbuf
                    0 paging space I/Os blocked with no psbuf
                 2228 filesystem I/Os blocked with no fsbuf
                 1537 client filesystem I/Os blocked with no fsbuf
                 4155 external pager filesystem I/Os blocked with no fsbuf
                 66.5 percentage of memory used for computational pages
462190 page ins
6958561 page outs
...
856457822 zero filled pages faults

There is very little physical paging activity. What is happening a lot is that the application is touching memory for the first time and/or doing malloc().

Also, when physical I/O activity is taking place, you are writing roughly 15 pages for each page read:

6958561 / 462190 ≈ 15 (pages out / pages in)

Your high number is just showing what I hope is normal startup activity. You have lots of memory free, so file caching is easy (which also explains why writes dominate reads).
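Just to make the arithmetic explicit, here is that ratio computed from the two counters (a quick Python sketch; the numbers are copied from the vmstat -s output posted above):

```python
# Cumulative counters from "vmstat -s" above (since boot).
page_ins = 462_190     # pages read in from disk
page_outs = 6_958_561  # pages written out to disk

# Writes per read over the life of the system.
ratio = page_outs / page_ins
print(f"{ratio:.1f} page-outs per page-in")
```

So the write-to-read ratio is about 15:1, not 14:1, but the conclusion is the same: writes dominate.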

So should I consider that normal activity, or can I configure the system to have fewer page faults?

You said that file caching is easy; can you explain that in more detail, and possibly point me to documentation that focuses on this?

Thank you so much

P.S. I have opened a case with IBM/Oracle and no one can solve/explain this problem..

just an idea...

Oracle Architecture and Tuning on AIX (v 2.30).pdf

Starting with Oracle 11.2.0.2, when AIX 6.1 or AIX 7.1 is detected, Oracle will use the O_CIOR option to open files on JFS2.
Therefore you should no longer mount the filesystems with the mount option -o cio.

When Oracle opens a file in CIO mode, your system will not cache that data.
Maybe you need to define a larger SGA to reach a good cache hit ratio:

SGA_MAX_SIZE
Starting with Oracle 9i, the Oracle SGA size can be changed dynamically. This means the DBA just
needs to set the maximum amount of memory available to Oracle (SGA_MAX_SIZE) and the initial
values of the different pools: DB_CACHE_SIZE, SHARED_POOL_SIZE, LARGE_POOL_SIZE etc.
The size of these individual pools can then be increased or decreased dynamically using the ALTER SYSTEM
command, provided the total amount of memory used by the pools does not exceed SGA_MAX_SIZE. If LOCK_SGA = TRUE,
this parameter defines the amount of memory Oracle allocates at DB startup in "one piece"! Also,
SGA_TARGET is ignored for the purpose of memory allocation in this case.

PS: Our DB machine runs "normal" with 30,000 page faults/sec

Zero-filled page faults are application behavior (e.g., malloc()). There is very little a system administrator can do about them.

File caching is automatic on AIX. Starting with AIX 6.1, the defaults for file caching behavior are pretty good to very good.

              8388608 memory pages
              8058736 lruable pages
              2428935 free pages

~29% of memory free (2,428,935 of 8,388,608 pages)
~96% of memory demand-pageable (least recently used "able")

              1404618 pinned pages
               384661 file pages
               384661 client pages
                 66.5 percentage of memory used for computational pages

~4.6% of memory used for file caching (client == JFS2 files; "file pages" = JFS + JFS2, so (nearly) all files are JFS2)
~16.7% of memory pinned
Computational memory is all working memory (application memory, plus program code (text)).
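If it helps, those percentages can be reproduced from the raw page counts in the vmstat -v output (a small Python sketch; it assumes the 4 KB base page size):

```python
PAGE_KB = 4  # AIX base page size in KB

# Page counts from "vmstat -v" above.
total  = 8_388_608   # memory pages
free   = 2_428_935   # free pages
pinned = 1_404_618   # pinned pages
client =   384_661   # client (JFS2 file cache) pages

print(f"RAM:        {total * PAGE_KB / 1024**2:.0f} GB")
print(f"free:       {100 * free   / total:.1f}%")   # matches the ~30% free reported
print(f"pinned:     {100 * pinned / total:.1f}%")   # matches the 16.7% pinned reported
print(f"file cache: {100 * client / total:.1f}%")   # matches the numclient percentage
```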

                    5 pending disk I/Os blocked with no pbuf
                    0 paging space I/Os blocked with no psbuf
                 2228 filesystem I/Os blocked with no fsbuf
                 1537 client filesystem I/Os blocked with no fsbuf
                 4155 external pager filesystem I/Os blocked with no fsbuf

Above tells me:

  • little raw disk I/O being done and/or sufficient buffers
  • no paging space activity
  • some JFS activity; the initial fs buffer count could be tuned higher
  • some real NFS/CD/DVD activity needing more buffers
  • same for JFS2

In short, file system buffering should be checked.

  ioo -a
  nfso -a

These commands will list the tunable variables.
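As a small illustration of how one might pull the "blocked with no ...buf" counters out of vmstat -v output programmatically, here is a hypothetical helper (not an AIX tool, just a sketch for cross-checking these numbers between runs):

```python
import re

def blocked_io_counters(vmstat_v_output: str) -> dict:
    """Extract the 'blocked with no ...buf' lines, which indicate
    filesystem buffer shortages, from vmstat -v style output."""
    counters = {}
    for line in vmstat_v_output.splitlines():
        m = re.match(r"\s*(\d+)\s+(.*blocked with no \w+)", line)
        if m:
            counters[m.group(2)] = int(m.group(1))
    return counters

sample = """\
                    5 pending disk I/Os blocked with no pbuf
                    0 paging space I/Os blocked with no psbuf
                 2228 filesystem I/Os blocked with no fsbuf
"""
print(blocked_io_counters(sample))
```

These counters are cumulative since boot, so what matters for tuning is whether they keep growing, not their absolute value.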

Thanks for all the replies, but there is one thing I cannot understand.

Why does AIX tell me there are page faults when all disks are running in ASM mode and none of them use raw devices/JFS etc.?

Does the VMM also account for disks not formatted with a UNIX filesystem, or only for known filesystems?

Thanks again
Giulio

"Paging" in AIX stands for any physical I/O. A page fault is simply when an application wants to access something in memory that is not there.

A "zero-page" fault is simply the first time a page is touched. A page is generally a 4k concept, but tuning can make frames (the in-core page concept) be allocated in 64k, or much larger (16 MB, iirc) sizes. So, for a zero-page fault, no physical I/O is needed when there is free memory available (as is your case).
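To put the nmon number in perspective: 3000 zero-fill faults per second at the 4 KB base page size is only about 12 MB/s of memory being touched for the first time, and none of it requires disk I/O. A rough back-of-the-envelope sketch:

```python
PAGE_KB = 4            # AIX base page size in KB

faults_per_sec = 3000  # the rate seen in nmon
mb_touched = faults_per_sec * PAGE_KB / 1024
print(f"~{mb_touched:.1f} MB/s of memory touched, zero disk I/O if zero-fill")
```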

Way back when, AIX was the first UNIX to use "free memory" for file caching rather than a set number of buffers. This has made discussions about paging difficult. On older UNIX systems (I do not know whether Solaris/HP-UX/other *NIX, or Linux, use file caching like AIX) the term paging referred to what AIX calls "paging space" I/O.

If I were to visit on-site to look at your situation I would look at, among other things, vmstat -I to see paging space and file paging activity.

Again, from the data above I do not see a reason to think there is a physical I/O problem. And when no physical I/O is needed, the paging stats can get quite high.

What would help me is for you to rephrase your problem, or question. Just to make sure we are on the same page, as it were. Statistics alone can be very misleading/confusing.

---------- Post updated at 03:43 PM ---------- Previous update was at 03:34 PM ----------

                 2228 filesystem I/Os blocked with no fsbuf 
                 1537 client filesystem I/Os blocked with no fsbuf 
                 4155 external pager filesystem I/Os blocked with no fsbuf

My "feeling" regarding these statistics is that the system has some initial start-up issues because the fsbufs are initially too small, but AIX expands them automatically.

IIRC the AIX/Oracle document referenced above also talks about how to make AIX increase these buffers faster (look for the ioo command in the document).

Note: I could type all the commands here, but that gives, imho, a false sense of security, as if everything were covered. What I hope is that after reading you will have a new question or questions, and from your new questions I will also better understand your problem. In short, I believe in dialog, not "this is it" or "wave a magic wand" kinds of answers.