deleting dupes in a row

Hello,
I have a large database in which name homonyms are arranged in a row. Since the database is large and generated by hand, duplicates very often creep in. I want to remove the dupes using either an awk or a perl script.
A sample input is given below:

The expected output is given below:

As can be seen, all the dupes are cleaned out.
At present I am using a macro which converts each row to lines, sorts them, deletes the dupes, and restores the row structure. Since the database is huge, the macro takes a very long time.
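(For reference, a rough command-line equivalent of that sort-based, per-row approach might look like the gawk sketch below; this is a reconstruction for illustration only, not the actual macro, and asort is a gawk extension.)

$ gawk -F'=' '
{
    n = split($0, names, "=")
    asort(names)                          # sort the names within this row (gawk extension)
    out = ""; prev = ""
    for (i = 1; i <= n; i++) {
        if (names[i] != prev)             # drop adjacent duplicates after sorting
            out = (out == "" ? names[i] : out "=" names[i])
        prev = names[i]
    }
    print out
}' input.txt

Note that, like the described macro, this reorders the names within each row.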
Many thanks in advance for a speedy solution

$ awk -F= '{for(i=1;i<=NF;i++){a[$i]++;}for(i in a){b=b"="i}{sub("=","",b);$0=b;b="";delete a}}1' OFS=\= input.txt
md=mohammed=mohd=muhammed=muhmd
mahndra=mahendra=mahendera
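For readability, the same hash-based idea can be expanded into a commented script. The variant below is a sketch, not part of the original one-liner; it keeps only the first occurrence of each name, so the left-to-right order of the row is preserved:

$ awk -F'=' '
{
    out = ""
    split("", seen)                       # clear the per-row lookup table (portable array reset)
    for (i = 1; i <= NF; i++)
        if (!seen[$i]++)                  # keep only the first occurrence of each name in the row
            out = (out == "" ? $i : out "=" $i)
    print out
}' input.txt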

---------- Post updated at 10:09 AM ---------- Previous update was at 10:04 AM ----------

$ perl -F= -lane 'print join "=",keys %{{ map { $_ => 1 } @F}};' input.txt
muhammed=muhmd=md=mohammed=mohd
mahendra=mahendera=mahndra

Many thanks for both the solutions. I will test them out and get back on the forum

---------- Post updated at 01:53 AM ---------- Previous update was at 01:47 AM ----------

Many thanks. Both solutions work perfectly.