Hi, I'm a graduate student at The State University of New York at Buffalo, currently majoring in Computer Science. Prior to this, I completed my undergraduate studies in Information Technology at the University of Pune. I worked at Amdocs, Inc. as a Software Engineer from June 2014 to 2015.
Hi, I love Software Development. I have professional experience as a Software Engineer at Amdocs, Inc., where I worked as a Java back-end developer. I have worked on building enterprise applications as well as smaller, self-contained modules. Along with this, I wrote shell scripts on Linux for automation and SQL queries to extract useful data. I have a strong knowledge of object-oriented languages such as Java, Python, and C++. I also enjoy working with front-end technologies such as HTML5, CSS3, JavaScript, and jQuery
Knowing programming languages has its advantages, but when tools exist to perform the same tasks, why not use them? As part of one of my projects, I worked with Hadoop and MapReduce. I also got to work with Apache Solr, which we used to index tweets in our tweet search engine project. For a project requirement at Amdocs, Inc., I also had to quickly learn one of their legacy data management tools, Master Enterprise Catalog, to implement a few functionalities. Apart from this, I have working experience with Maven, Perforce, Git, Apache Subversion (SVN), and ActiveVOS
In my free time I love listening to music; my favorite genres are Alternative Rock, House, and Indian Pop. I also regularly follow many TV shows, among which Sci-Fi and sitcoms are my personal favorites. I like reading comic books, and I can very easily get lost in the Marvel and DC universes. I enjoy traveling and exploring new places, relishing the regional cuisines along the way. I'm a big foodie, and I also love to cook my own meals.
For this project we indexed around 10,000 tweets in Apache Solr. Since this was a multilingual search system, the tweets were in various languages, including English, German, Russian, French, and Arabic. Based on the user's query, the search engine retrieves and presents the most relevant data. Along with the tweets, the search engine also provides an analysis of the retrieved data: a summary of the search query, sourced from Wikipedia, is displayed at the top. We provided faceting options that let the user segregate tweets by language, location, and sentiment (positive, negative, or neutral). We also implemented graphical analysis of the results based on location and organization.
To see a demo of this project, click on demo.
Apache Solr Java HTML5 JavaScript CSS3 jQuery Alchemy API MediaWiki
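As a rough sketch of how such a faceted Solr query could be assembled (the field names `lang`, `location`, and `sentiment`, the collection name `tweets`, and the localhost URL are illustrative assumptions, not the project's actual schema):

```python
from urllib.parse import urlencode

def build_faceted_query(text, languages=None, rows=10):
    """Build query parameters for a faceted Solr /select request.

    Faceting on language, location, and sentiment mirrors the options
    described above; field names are illustrative assumptions.
    """
    params = [
        ("q", f"text:{text}"),
        ("rows", rows),
        ("facet", "true"),
        ("facet.field", "lang"),
        ("facet.field", "location"),
        ("facet.field", "sentiment"),
    ]
    if languages:
        # restrict results to the selected language facets
        params.append(("fq", "lang:(%s)" % " OR ".join(languages)))
    return params

query = build_faceted_query("election", languages=["en", "de"])
url = "http://localhost:8983/solr/tweets/select?" + urlencode(query)
```

Sending a GET request to such a URL would return both the matching tweets and per-field facet counts, which is what drives the segregation options in the UI.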
A prototype of an Amazon Dynamo-like DB implemented in a distributed Android environment. I implemented a set of methods that together make this system a highly available structured storage: it has the properties of both a key-value database and a distributed hash table (DHT). The features primarily involved partitioning, chain replication, and failure handling. Connections between the Android AVDs were made using TCP sockets. The major focus was on correctness rather than performance.
Android Java Sockets
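Partitioning and replication in Dynamo-style systems are commonly built on a consistent-hashing ring, where each key's coordinator and replica chain are the first few nodes clockwise from the key's hash. A minimal Python sketch of that idea (the node identifiers, SHA-1 hashing, and replica count of 3 are illustrative assumptions, not the project's exact code):

```python
import hashlib
from bisect import bisect_right

def _hash(key):
    # SHA-1 hex digest, a common choice for Dynamo-style rings
    return hashlib.sha1(key.encode()).hexdigest()

class Ring:
    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        # place each node on the ring at the hash of its id
        self.points = sorted((_hash(n), n) for n in nodes)

    def preference_list(self, key):
        """Coordinator = first node clockwise from hash(key);
        the next replicas-1 successors form the replication chain."""
        hashes = [h for h, _ in self.points]
        i = bisect_right(hashes, _hash(key)) % len(self.points)
        return [self.points[(i + j) % len(self.points)][1]
                for j in range(self.replicas)]
```

Writing a key to every node in its preference list, in order, is what gives chain replication; when a node fails, the next successor on the ring takes over its range.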
Implemented K-Means clustering on genomic data using MapReduce in Hadoop (single node) to identify similar genes based on their expression values. Once clustering was done, I calculated an external index using the Jaccard coefficient. The original data set and the clustering results were visualized by performing Principal Component Analysis (PCA) in Python using the library sklearn.decomposition.PCA.
Java Hadoop MapReduce sklearn.decomposition.PCA Python
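The Jaccard coefficient used here as an external index compares a clustering against ground-truth labels over all point pairs. A small self-contained sketch of the computation (pure Python for illustration, not the project's MapReduce code):

```python
from itertools import combinations

def jaccard_index(labels, truth):
    """External clustering index over all point pairs.

    m11 = pairs in the same cluster in both labelings,
    m10 / m01 = pairs together in exactly one labeling.
    Jaccard = m11 / (m11 + m10 + m01); 1.0 means perfect agreement.
    """
    m11 = m10 = m01 = 0
    for i, j in combinations(range(len(labels)), 2):
        same_pred = labels[i] == labels[j]
        same_true = truth[i] == truth[j]
        if same_pred and same_true:
            m11 += 1
        elif same_pred:
            m10 += 1
        elif same_true:
            m01 += 1
    return m11 / (m11 + m10 + m01)
```

Note that the index is invariant to cluster renaming, which is why it suits comparing K-Means output against ground-truth gene groups.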
The Hough Transform for circle detection was used in this project to determine circles in a given image. The accuracy of this algorithm was found to be 100%, but its performance is somewhat slow because of the 3D accumulator array it uses. The circles were also detected using the Randomized Circle Detection (RCD) algorithm described in this paper. This algorithm does not use an accumulator array, so it occupies less space and performs faster than the traditional approach; its accuracy, however, is not 100%.
Python NumPy OpenCV
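The 3D accumulator voting at the heart of the circle Hough Transform can be sketched in pure Python: every edge point votes for all (a, b, r) circle parameters that could have produced it, and the accumulator peak is the detected circle. The project itself used NumPy/OpenCV; the synthetic edge points and parameter ranges below are illustrative assumptions.

```python
import math
from collections import Counter

def hough_circles(edge_points, r_min, r_max, steps=72):
    """Vote in a 3D (a, b, r) accumulator and return the peak cell."""
    acc = Counter()
    for x, y in edge_points:
        for r in range(r_min, r_max + 1):
            for s in range(steps):
                t = 2 * math.pi * s / steps
                # candidate center that would place (x, y) on the circle
                a = round(x - r * math.cos(t))
                b = round(y - r * math.sin(t))
                acc[(a, b, r)] += 1
    return acc.most_common(1)[0][0]

# synthetic edge points: a circle centered at (10, 10) with radius 5
pts = [(10 + 5 * math.cos(2 * math.pi * k / 36),
        10 + 5 * math.sin(2 * math.pi * k / 36))
       for k in range(36)]
a, b, r = hough_circles(pts, 3, 7)
```

The triple loop over edge points, radii, and angles is exactly why the accumulator approach is slow and memory-hungry compared with RCD, which samples point triples instead.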
Social media offers people a good platform to express their feelings about current issues. During the 2016 presidential campaign, people took to Twitter to showcase their views about all the candidates. To analyze this, I built an app in R-Shiny that gathers raw tweets using the Twitter API and then performs exploratory data analysis to extract useful information from them. Sentiment analysis was carried out on the fetched tweets. A provision was also made to call the API and gather live tweets while the app was running, so that analysis could be performed on them as well. A demo of the app can be found here
R R-Shiny twitteR tm-Text Mining sentiment-R
An image processing project to merge adjacent regions. The approach is simple: first merge adjacent neighboring pixels to form regions, then merge adjacent regions to form objects. The merging of adjacent regions was done on the basis of a homogeneity criterion: the gray-level intensity of the pixels.
Python NumPy OpenCV
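One common way to realize the pixel-merging step is union-find over the 4-neighborhood, merging neighbors whose intensities are close enough. A minimal sketch under that assumption (the tolerance value and pure-Python image representation are illustrative, not the project's actual code):

```python
def merge_regions(img, tol=10):
    """Union-find region merging: 4-adjacent pixels whose gray-level
    intensities differ by at most `tol` end up in the same region.
    `img` is a 2-D list of intensities; returns a same-shaped label map
    where each pixel carries its region's root id."""
    h, w = len(img), len(img[0])
    parent = list(range(h * w))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    for y in range(h):
        for x in range(w):
            if x + 1 < w and abs(img[y][x] - img[y][x + 1]) <= tol:
                union(y * w + x, y * w + x + 1)
            if y + 1 < h and abs(img[y][x] - img[y + 1][x]) <= tol:
                union(y * w + x, (y + 1) * w + x)
    return [[find(y * w + x) for x in range(w)] for y in range(h)]
```

The second stage described above, merging regions into objects, can reuse the same union on region-level statistics (e.g. mean intensity) instead of individual pixels.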
A database project built for an agency that distributes advertising pamphlets in bulk across the country, which also provides facilities for tracking orders and managing schedules.
Oracle 11g VB6
An enterprise application built at Amdocs, Inc. This application was essentially an order capturing and management tool that other organizations can use to keep track of current VoIP and MIS orders. I played the role of a back-end developer on this project: I exposed and consumed SOAP web services, built various process flows to carry out automation, wrote shell scripts for building and deploying releases, and wrote SQL queries to extract data as per client needs.
Apache Solr Java SOAP Maven Junit ActiveVOS Perforce Apache Subversion(SVN) Oracle 11g Linux
A research-based project aimed at restructuring the architecture of the traditional Hadoop filesystem. The goal was to scale the NameNode using a cache-like approach. We set up a Hadoop cluster consisting of three DataNodes and one NameNode, and all the analysis was carried out on this cluster. We set a threshold value based on which the data present in the NameNode was partitioned, so that unused metadata could be moved to secondary memory. A research paper based on this work is published in the IEEE Xplore Digital Library. To view the paper, click here
Hadoop MapReduce Java Maven Linux
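The threshold-based offloading of NameNode metadata can be illustrated with a toy in-memory model: metadata accessed fewer times than the threshold is moved out of the hot map into a secondary store, and faulted back in on access. The class, method names, and dict-based stores below are hypothetical stand-ins; the real project modified the Hadoop filesystem itself.

```python
class MetadataCache:
    """Toy model of the threshold idea: file metadata whose access
    count is below `threshold` at eviction time is moved out of the
    in-memory map into a secondary store (a dict standing in for
    secondary memory)."""
    def __init__(self, threshold=2):
        self.threshold = threshold
        self.primary = {}     # hot metadata, as held by the NameNode
        self.secondary = {}   # cold metadata, offloaded
        self.hits = {}        # access counts per path

    def put(self, path, meta):
        self.primary[path] = meta
        self.hits[path] = 0

    def get(self, path):
        if path in self.secondary:
            # fault cold metadata back into primary memory
            self.primary[path] = self.secondary.pop(path)
        self.hits[path] = self.hits.get(path, 0) + 1
        return self.primary[path]

    def evict(self):
        # offload everything accessed fewer than `threshold` times
        for path in [p for p, n in self.hits.items()
                     if n < self.threshold and p in self.primary]:
            self.secondary[path] = self.primary.pop(path)
```

Keeping only hot metadata resident is what lets the NameNode's effective namespace scale beyond its physical memory, at the cost of slower access to cold entries.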
33 Tyler Street, Upper, Buffalo, NY-14214
+1 (716) 970-8713