Optimum Oracle performance vs. SGA memory allocation

Dear all experts,
I have a p750 POWER7 3.3 GHz server with 4 processors and 48 GB of memory. It is a database server running Oracle 9i.
I have been told that Oracle 9i can only be given about 10 GB as the SGA max to get optimum performance, and that anything more will cause memory overflow and make Oracle slow.
I would like to know whether that statement is correct: can Oracle 9i really not take more memory when I have so much to offer?
I have always thought that the more memory we have, the faster the server runs, but that does not seem to be the case in this situation.
May I have the experts' view on this situation, so that I can make the server perform better and utilise as much of the memory as possible?

Below is the initABCHOST.ora:

*._kgl_latch_count=34
*.aq_tm_processes=1
*.audit_trail='DB'
*.background_dump_dest='/app/oracle/admin/ABCHOST/bdump'
*.compatible='9.2.0.1.0'
*.control_file_record_keep_time=7
*.control_files='/data05/HLAPROD/control01.ctl','/data10/ABCHOST/control02.ctl'
*.core_dump_dest='/app/oracle/admin/ABCHOST/cdump'
*.cpu_count=4
*.cursor_sharing='FORCE'
*.db_block_size=8192
*.db_cache_advice='ON'
*.db_cache_size=3674210304
*.db_domain=''
*.db_file_multiblock_read_count=32
*.db_files=300
*.db_keep_cache_size=1572864000
*.db_name='HLAPROD'
*.db_writer_processes=4
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ABCHOSTXDB)'
*.dml_locks=4860
*.enqueue_resources=5124
*.fast_start_mttr_target=300
*.hash_join_enabled=TRUE
*.instance_name='ABCHOST'
*.java_pool_size=15286400
*.job_queue_processes=10
*.large_pool_size=576777216
*.log_buffer=2048000
*.log_checkpoint_interval=10000000
*.log_checkpoint_timeout=0
*.log_checkpoints_to_alert=TRUE
*.max_enabled_roles=120
*.open_cursors=2000
*.optimizer_index_caching=70
*.optimizer_index_cost_adj=100
*.pga_aggregate_target=3221225472
*.processes=1500
*.query_rewrite_enabled='TRUE'
*.remote_dependencies_mode='SIGNATURE'
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_limit=TRUE
*.session_cached_cursors=500
*.sga_max_size=10737418240
*.shared_pool_reserved_size=247815065
*.shared_pool_size=1503013120
*.sort_area_size=4194304
*.star_transformation_enabled='FALSE'
*.timed_statistics=TRUE
*.transactions=2000
*.undo_management='AUTO'
*.undo_retention=10000
*.undo_suppress_errors=TRUE
*.undo_tablespace='UNDOTBS2'
*.user_dump_dest='/app/oracle/admin/ABCHOST/udump'
*.utl_file_dir='/data06/utlfiles'
*.workarea_size_policy='AUTO'

Thanks.

You should not size the SGA according to what you have in the box, but according to what your DB actually needs. An SGA that is too big can dramatically slow down performance, and so can one that is too small. And apart from the SGA you also have a PGA; sometimes it makes more sense to give memory to the PGA instead of extending the SGA, and so on. Since you already run with db_cache_advice='ON', the advisory views (v$db_cache_advice, v$shared_pool_advice, v$pga_target_advice) will show you what each pool would gain or lose at other sizes, which is a much better guide than the amount of RAM installed.
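As a quick sanity check on "size by need, not by box": summing the individual SGA components from the posted initABCHOST.ora against sga_max_size is a rough sketch (it ignores the fixed SGA area and granule rounding, so the real figure is slightly higher):

```python
# Sum of the SGA components from the posted initABCHOST.ora.
# Rough sketch: ignores the fixed SGA and granule rounding.
GIB = 1024 ** 3

components = {
    "db_cache_size":      3674210304,
    "db_keep_cache_size": 1572864000,
    "shared_pool_size":   1503013120,
    "large_pool_size":     576777216,
    "java_pool_size":       15286400,
    "log_buffer":            2048000,
}
sga_max_size = 10737418240  # 10 GiB

total = sum(components.values())
print(f"components sum: {total / GIB:.2f} GiB")        # ~6.84 GiB
print(f"sga_max_size:   {sga_max_size / GIB:.2f} GiB")
print(f"headroom:       {(sga_max_size - total) / GIB:.2f} GiB")
```

In other words, this instance is only configured to use about 6.8 GiB of its 10 GiB sga_max_size anyway, so raising sga_max_size on its own changes nothing unless the component sizes (guided by the advisories) justify growing too.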

Oracle on Unix is a process-based DB (each dedicated server connection forks its own server process), so how much memory you need obviously depends on how many connections you have in parallel: a box with 20 connections per day needs a lot less than a box with 3,000 concurrent connections. Each of those server processes can use up to the size of one PP (physical partition) in the corresponding VG of memory. It also matters whether your box is doing anything else apart from the DB, and what kind of load your DB has: a data warehouse has a very different utilization pattern than a trading system, for example. Are you doing EOD batches, what kind of backups are you running, how big are your tables, and so on.
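To see why the number of concurrent connections matters, here is a back-of-envelope memory budget for the posted settings. The ~5 MiB of fixed overhead per dedicated server process is an assumed illustrative figure, not a measured one, and PGA use can exceed its target under load:

```python
# Back-of-envelope footprint: SGA + PGA target + per-process
# overhead for dedicated server connections, vs. physical RAM.
MIB = 1024 ** 2
GIB = 1024 ** 3

sga_max_size         = 10737418240  # 10 GiB, from the init.ora
pga_aggregate_target =  3221225472  # 3 GiB, from the init.ora
processes            = 1500         # from the init.ora
per_process_overhead = 5 * MIB      # assumption: ~5 MiB each

total = sga_max_size + pga_aggregate_target + processes * per_process_overhead
print(f"worst-case footprint: {total / GIB:.1f} GiB of 48 GiB")  # ~20.3 GiB
```

Even with all 1,500 processes connected, that only accounts for around 20 GiB on this box; the rest of the RAM is not wasted, since it still serves the filesystem cache and whatever else runs on the server.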

Regards
zxmaus