Thanks to Ramanujan’s effort, one can now access Sastras through the computer.
In the late 1920s, Ghanapathi Parankusachar Swami won a prize in Sanskrit. When asked whether he wanted the prize of Rs 3,000 in cash or kind, he asked for books! Thus he acquired a wonderful library. This enabled his son Ramanujan to pore over the books every day.
Ramanujan spent seven years putting the contents of the Sastras into a database. He culled 30,000 sutras from all the Sastras, classified the different aspects of the Sastras, and gave his compendium the name, Sakala Sastra Sutra Kosa.
When Paramananda Bharati, a retired professor of Physics from IIT Madras who became a sanyasi after being initiated by the Sringeri Pontiff, organised a conference in Delhi on Sanskrit and Computers, Ramanujan told him about the kosa and was asked to present a paper at the conference.
The paper was on using computers for Sanskrit. Many IIT professors were present, and what caught their attention was that Ramanujan had come up with a flow chart in Sanskrit, and a programme for the generation of nouns. The then President of India, Dr. Shankar Dayal Sharma, was so impressed that he suggested that Dr. Bhatkar, founder-director of the Centre for Development of Advanced Computing (C-DAC), make use of Ramanujan’s services. In 1990, Ramanujan joined C-DAC, Pune. While in Pune, Ramanujan developed DESIKA, a comprehensive package for generating and analysing Sanskrit words.
What does DESIKA do? “Given a Sanskrit word, it gives you the hidden meanings, the meanings with which it is packed. Key in a word and DESIKA gives you the noun attributes like paradigm, ending type, noun base, number and case, and similarly for verbs.”
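DESIKA itself is C-DAC's software and its interface is not described here, but the kind of output Ramanujan describes can be sketched as a lookup that returns a word's morphological attributes. The word, attribute names and values below are illustrative assumptions, not DESIKA's actual API.

```python
# Hypothetical sketch of a DESIKA-style analysis: given a Sanskrit word,
# return its noun attributes (paradigm, ending type, base, number, case).
# The entries are made-up illustrations, not DESIKA's real data.

ANALYSES = {
    # 'ramah' -- nominative singular of the base 'rama' (example entry)
    "ramah": {
        "category": "noun",
        "paradigm": "rama",         # inflectional paradigm the word follows
        "ending_type": "a-ending",  # class of the stem-final sound
        "base": "rama",             # noun base (pratipadika)
        "number": "singular",
        "case": "nominative",
    },
}

def analyse(word: str) -> dict:
    """Return the stored morphological analysis for a word, if known."""
    return ANALYSES.get(word, {"category": "unknown"})

print(analyse("ramah")["case"])  # nominative
```

A real analyser would derive these attributes from rules rather than a table; the sketch only shows the shape of the result.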
When Ramanujan joined C-DAC, their ISCII standard was in the testing stage. Ramanujan wrote the Vedic part of the standard.
Around this time, a question was raised in Parliament about what Indian scientists were doing in the field of Computers and Sanskrit. Ramanujan was asked to make a presentation in Parliament. He presented DESIKA, and later gave a demo in the Parliament annexe. The then Prime Minister P.V. Narasimha Rao, who held the Science and Technology portfolio, attended the demo and was amazed at the simplicity of DESIKA.
Ramanujan made a second presentation in Parliament in 1993. The question now was about how to handle differences between Vedic and classical Sanskrit. Ramanujan replied that this would pose no problems, and showed a 73 by 26 matrix, which he had prepared (73 individual characters in the Vedic part and 26 parameters). Every Vedic syllable has three components – consonant, vowel and accent – and each syllable has 26 parameters, which define it fully.
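The actual character set and parameter names of Ramanujan's matrix are not given in the article, but the data structure he describes can be sketched as follows: 73 rows (Vedic characters), 26 columns (defining parameters), and a syllable decomposed into consonant, vowel and accent. Everything beyond those dimensions is an assumption for illustration.

```python
# Sketch of the idea behind the 73-by-26 matrix described above:
# each of 73 Vedic characters is a row of 26 defining parameters,
# and a syllable is fully specified by its three components.
# Parameter values here are placeholders, not the real encoding.

from dataclasses import dataclass

N_CHARS, N_PARAMS = 73, 26

# Placeholder matrix: row i holds the 26 parameter values for character i.
matrix = [[0] * N_PARAMS for _ in range(N_CHARS)]

@dataclass
class VedicSyllable:
    consonant: int  # row index into the matrix
    vowel: int      # row index into the matrix
    accent: int     # e.g. udatta / anudatta / svarita, encoded as a row

    def parameters(self):
        """Collect the parameter rows for the syllable's three components."""
        return [matrix[self.consonant], matrix[self.vowel], matrix[self.accent]]

syllable = VedicSyllable(consonant=0, vowel=1, accent=2)
assert len(syllable.parameters()) == 3
```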
In 1994, C-DAC began work on Vedic fonts, and today all the Vedas have been rendered machine readable. Searchable, analysable Sastraic content, the Itihasas, Puranas and Divya Prabandham are all now available too, with value-added features such as retrieval by word, stem or compound, including Boolean search. The same keyboard layout can be used for any script.
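Stem-aware retrieval with Boolean search, of the kind just described, can be sketched with a small inverted index. The corpus, the toy stemming rule and the index layout below are illustrative assumptions, not C-DAC's implementation.

```python
# Minimal sketch of stem-aware Boolean (AND) retrieval over a toy corpus.
# The documents and the suffix-stripping "stemmer" are made up for
# illustration; real Sanskrit stemming needs full morphological analysis.

from collections import defaultdict

corpus = {
    1: ["rama", "agni"],
    2: ["agni", "vahni"],
    3: ["ramasya", "vahni"],
}

def stem(word: str) -> str:
    """Toy stemmer: strip a couple of illustrative case endings."""
    for suffix in ("sya", "ih"):
        if word.endswith(suffix):
            return word[: -len(suffix)]
    return word

index = defaultdict(set)  # stem -> set of document ids containing it
for doc_id, words in corpus.items():
    for w in words:
        index[stem(w)].add(doc_id)

def search_and(*terms):
    """Boolean AND over stemmed query terms."""
    sets = [index[stem(t)] for t in terms]
    return set.intersection(*sets) if sets else set()

print(sorted(search_and("rama", "agni")))  # [1]
```

Note that the query "rama" also matches "ramasya", because both reduce to the same stem; that is the point of retrieval by stem rather than by surface word.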
Ramanujan entrusted the task of typing out old texts to students of Veda Pathasalas. “One lakh pages have been typed, and 600 texts covered. But the task of annotation still remains, because there are not enough knowledgeable people to do the job.”
Aren’t people who study for many years in pathasalas competent to do this? “Not necessarily. Most of the pathasalas concentrate on rote learning. I feel we can dilute the memorising part and concentrate on analysis. We need to make this kind of study monetarily attractive as well.”
Ramanujan was the Principal Investigator for the TARKSHYA (Technology for Analysis of Rare Knowledge Systems for Harmonious Youth Advancement) project, which envisages providing Sanskrit institutions across the country with high-speed connectivity to promote heritage computing activities. Content has also been developed for online study. Three courses have been designed: Vedic processing, Sastras and manuscript processing. “We have video lectures by 40 scholars. Students can access the lectures through their mobiles. If a student wants to search something later, he can do so, for a verbatim transcript is available.”
For manuscript processing, C-DAC has developed a computer application called Pandu-lipi Samshodaka, with browsing, searching, indexing, analysis and hyperlinking features.
Ramanujan takes me round his library, which has many rare manuscripts, some of them more than 400 years old. They have all been digitised. He feels students must seek out old manuscripts, for who knows what treasures lie hidden in them?
How can we tweak education for students of traditional learning? “A student of Indian logic should study Western logic too. A student of vyakarana must study modern theories of linguistics. Study should be interdisciplinary – mathematics in ancient Sanskrit texts and in modern texts; transdisciplinary – that is, across different areas within Sanskrit such as vyakarana, mimamsa and nyaya; and multidisciplinary – a student of ayurveda could perhaps study the therapeutic aspects of music.”
Helpful for scholars
Ramanujan has a website, parankusa, on which he gives the Arsheya system for the Krishna Yajur Veda. This is a topical arrangement of contents. What is actually followed today is the Saarasvatha system, which does not have such an ordering. Giving the Arsheya system alongside the Saarasvatha ordering has been of great help to many Sanskrit scholars.
Proving the compatibility of Science and Sastras, Dr. P. Ramanujan headed a project on ‘Computational Rendering of Paninian Grammar’
In the early 1900s, analytic philosophers such as Russell and initially Wittgenstein too, tried to develop artificial languages, which, unlike ordinary language, would provide them with a more logical grammar, and words with unambiguous meanings. Language was a major preoccupation for later analytic philosophers such as Austin too, although he felt ordinary language itself would serve the purpose of the philosopher.
Talking about generative grammar, linguist Noam Chomsky said that grammar books do not show how to generate even simple sentences, without depending on the implicit knowledge of the speaker. He said this is true even of grammars of “great scope” like Jespersen’s ‘A Modern English Grammar on Historical Principles.’ There is some “unconscious knowledge” that makes it possible for a speaker to “use his language.” This unconscious knowledge is what generative grammar must render explicit. Chomsky said there were classical precedents for generative grammar, Panini’s grammar being the “most famous and important case.”
Walter Eugene Clark, who was Professor of Sanskrit at Harvard University, and who translated Aryabhatta’s Aryabhatiya into English, wrote that “Panini’s grammar is the earliest scientific grammar in the world, and one of the greatest.” He said the “Indian study of language was as objective as the dissection of the body by an anatomist.”
Not surprisingly, there are scientists who study Paninian grammar with a view to seeing what applications it has in the area of Natural Language Processing (NLP) research.
Dr. P. Ramanujan, Programme Co-ordinator, Indian Heritage Group, C-DAC, Bengaluru, is an authority on Paninian grammar. With a tuft, a namam on his forehead and a traditional dhoti, he doesn’t look like a typical scientist. Ramanujan is proof that traditional education need not stand in the way of a career in science, for it is his traditional learning which has brought him to where he is today.
Trained from the age of three by his father, Ghanapathi Parankusachar Swami, Ramanujan completed his study of the 4000 verses of the Divya Prabandham by the age of 11. After his upanayanam, Vedic studies began. But he also had to go to regular school, so that he had an almost 24-hour academic engagement, studying one thing or the other.
A brilliant student, Ramanujan wanted to become an engineer. But his father wanted him to take up a job soon, and so suggested he do a diploma course. After obtaining his diploma, Ramanujan joined HAL. Later on, he graduated in engineering, and did his Master’s in Engineering at IISc, where his thesis was on the Development of a General Purpose Sanskrit Parser.
What would make a study of Sanskrit useful to a student of Computer Science? “If a language has many meanings for a word, it is ambiguous, but when Sanskrit has many meanings for a word, it is rich!” says Dr. Ramanujan, who headed a project on ‘Computational Rendering of Paninian Grammar.’
The richness of Sanskrit comes from the fact that everything is pre-determined and derivable. “There is a derivational process, and so there is no ambiguity. You can explain everything structurally. There is a base meaning, a suffix meaning and a combination meaning. The base is the constant part, and the suffix is the variable part. The variables are most potent. With suffixes one can highlight, modify or attenuate.”
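The compositional idea here – a constant base, a variable suffix, and a combination meaning – can be sketched in a few lines. The glosses below are rough assumptions for illustration; Panini's actual derivational machinery (prakriti plus pratyaya, with rules conditioning each step) is far richer than this toy.

```python
# Sketch of "base meaning + suffix meaning = combination meaning".
# The root 'kr' and the suffix names are Paninian, but the English
# glosses and the composition rule are simplified assumptions.

BASES = {"kr": "to do"}             # verbal root: the constant part

SUFFIXES = {
    "trc": "agent of",              # agentive suffix: 'one who does'
    "lyut": "act or instrument of", # action-noun suffix
}

def derive(base: str, suffix: str) -> str:
    """Combination meaning: apply the suffix meaning to the base meaning."""
    return f"{SUFFIXES[suffix]} '{BASES[base]}'"

print(derive("kr", "trc"))   # agent of 'to do' -- i.e. a doer
print(derive("kr", "lyut"))  # act or instrument of 'to do'
```

Stacking further suffixes, as the next remark notes, would correspond to applying `derive`-like steps repeatedly, each one highlighting, modifying or attenuating the meaning so far.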
Two different words may denote an object, but you can’t use them interchangeably, for the functional aspect is what matters. For example you can’t replace ‘Agni’ with ‘Vahni,’ for ‘Agni’ has its own componential meaning.
An object may be denoted by the base. An object can have sets of relationships and interactions with other things in the world. For example, ‘Rama’, in relation to other objects, may be an agent of some activity or the recipient etc. “Even the interactions have been codified nicely and briefly. Clarity and brevity are the hallmarks of Panini’s work. His rule-based approach is his biggest plus point.”
Isn’t it true that in Sanskrit you don’t have to coin words for a new invention or discovery, and you can derive a word to suit the functionality of the object? “Yes. You have all the components with you to derive a word. You can use multiple suffixes, if need be, to show the particular function of an object.”
Does meaning vary according to accent? “It does. For the same suffix, different meanings are derivable because of accent differences. So you have the Divine Couple, Jaganmatha and Jagathpitha. How do you show the difference between our parents for all time and our parents in this life alone? Accent helps here. This is how the Vedas are most apt, and this has been fully noted by Panini. He gave us a conceptual, functional system. You take an example, apply the rules and get clarity about what it means. So the structure is important. The component approach is important.”
Wasn’t there an occasion when the work of a Finnish scholar, who found fault with Panini, was referred to you? “The Finnish scholar said that Panini was wrong in some rules relating to Vedic grammar. ‘Let Lakaara’ is used only in the Vedas, and Panini wrote five sutras for it. The Finnish scholar felt Panini could have handled this differently. George Cardona, from the University of Pennsylvania, referred him to me. I pointed out that Panini cannot be faulted internally. After all, he set out a metalanguage first and said, in effect, this is how I will write my rules. Externally, if you want, write a grammar yourself. Many have tried, and no one has been able to better Panini.”
Have you included ‘Let Lakaara’ in your programs? “Yes, I have. ‘Let Lakaara’ is very tough, because 108 forms can be generated theoretically for every root. N.S. Devanathachariar, Mimamsa Professor in Tirupati, appreciated my work.”
However, Dr. Bachchu Lal Awasthi, a Presidential awardee and a grammarian, felt that only as many forms as occur in the Vedas should be generated. His objection was that one should use the Sutras to understand what existed, but one should not use the Sutra to generate the rest.
When Ramanujan explained that his program was done mainly to show how the rules worked, Dr. Awasthi conceded that Ramanujan did have a point. “This just shows that people can be won over, if we are able to show the purpose of something.”