Hello, I hope you are doing well.
We have an immediate opening for the position below. Kindly let me know your interest and send your updated resume to *[email protected]*.

*NO H1B!!*

*We have a new role for a Data Engineer contractor*
*Duration: 6-12 months*
*Skills Required: Strong Python (must be able to pass a coding test in Python) AND strong SQL skills. Must have depth in both.*
*Skype or Zoom interview OK*
*Must relocate and work onsite in Seattle, WA*

*Data Engineer*

We’re looking for a Data Engineer to help us transform our data systems and architecture to support greater variety, volume, and velocity of data and data sources.

You might be a good fit if:
• You enjoy extracting data from a variety of sources and finding ways to connect it and make it suitable for use in software systems and for the development of models and algorithms.
• You enjoy interacting with new database systems, learning new data technologies, and are interested in developing your knowledge of new tools and techniques.
• You are interested in automating data engineering efforts to minimize human interaction and in optimizing data quality.
• You have an interest in developing your knowledge of practical data science techniques and technologies in addition to your data engineering knowledge and experience.

This role requires comprehensive data engineering skills; it is not a SQL developer role, though SQL is a required skill.

*Responsibilities:*

We’re looking for an experienced data engineer to help us:
· Build and maintain serverless data ingestion and refresh pipelines at terabyte scale using *AWS cloud services – AWS Glue, Amazon Redshift, Amazon S3, Amazon Athena, DynamoDB, and others*
· Incorporate new data sources from external vendors using flat files, APIs, web scraping, and databases.
· Maintain and provide support for the existing data pipelines using *Python, Glue, Spark, and SQL*
· Work to develop and enhance the database architecture of the new analytic data environment, including recommending optimal choices between relational, columnar, and document databases based on requirements
· Identify and deploy appropriate file formats for data ingestion into various storage and/or compute services via Glue for multiple use cases
· Develop real-time/near-real-time data ingestion from web and web service logs from Splunk
· Maintain existing processes and develop new methods to match external data sources to Homesite data using exact and fuzzy methods
· Implement and use machine-learning-based data wrangling tools like Trifacta to cleanse and reshape third-party data to make it suitable for use.
· Develop and implement tests to ensure data quality across all integrated data sources.
· Serve as internal subject matter expert and coach to train team members in the use of distributed computing frameworks for data analysis and modeling, including AWS services and Apache projects

*Qualifications:*
· Master’s degree in Computer Science, Engineering, or equivalent work experience
· Two to four years’ experience working with datasets with hundreds of millions of rows using a variety of technologies
· Intermediate- to expert-level programming experience in Python and SQL in *Windows and Mac/Linux environments*
· Intermediate-level experience working with distributed computing frameworks, especially Spark

*Thanks & Regards,*
*Bharat Chhibber*
*Email:* *[email protected]*
*Phone #: 201-331-6935*
*15 Barbara St, Kendall Park, New Jersey 08824*
*http://www.votoconsulting.com*

--
You received this message because you are subscribed to the Google Groups "Android Developers" group.

