Here are some of the key terms:

1. Algorithm: A mathematical formula or statistical process run by software to perform an analysis of data. It usually consists of multiple calculation steps and can be used to automatically process data or solve problems.

2. Amazon Web Services: A collection of cloud computing services offered by Amazon to help businesses carry out large-scale computing operations (such as big data projects) without having to invest in their own server farms and data storage warehouses. Essentially, storage space, processing power and software operations are rented rather than bought and installed from scratch.

3. Analytics: The process of collecting, processing and analyzing data to generate insights that inform fact-based decision-making. In many cases it involves software-based analysis using algorithms. For more, have a look at my post: What the Heck is… Analytics

4. Big Table: Google’s proprietary data storage system, which it uses to host, among other things, its Gmail, Google Earth and YouTube services. It is also made available for public use through the Google App Engine.

5. Biometrics: Using technology and analytics to identify people by one or more of their physical traits, for example through face recognition, iris recognition or fingerprint recognition. For more, see my post: Big Data and Biometrics

6. Cassandra: A popular open source database management system maintained by The Apache Software Foundation, designed to handle large volumes of data across distributed servers.

7. Cloud: Cloud computing, or computing “in the cloud”, simply means software or data running on remote servers rather than locally. Data stored “in the cloud” is typically accessible over the internet from wherever in the world the owner of that data might be. For more, check out my post: What The Heck is… The Cloud?

8. Distributed File System: A data storage system designed to store large volumes of data across multiple storage devices (often cloud-based commodity servers), decreasing the cost and complexity of storing large amounts of data.

9. Data Scientist: Term used to describe an expert in extracting insights and value from data. It is usually someone who has skills in analytics, computer science, mathematics, statistics, creativity, data visualisation and communication, as well as business and strategy.

10. Gamification: The process of turning something that would not usually be a game into a game. In big data terms, gamification is often a powerful way of incentivizing data collection. For more on this, read my post: What The Heck is… Gamification?

11. Google App Engine: Google’s own cloud computing platform, allowing companies to develop and host their own services on Google’s cloud servers. Unlike Amazon Web Services, it is free for small-scale projects.

12. HANA: High-Performance Analytic Appliance – a software/hardware in-memory platform from SAP, designed for high-volume data transactions and analytics.

13. Hadoop: Apache Hadoop is one of the most widely used software frameworks in big data. It is a collection of programs which allow storage, retrieval and analysis of very large data sets using distributed hardware (allowing the data to be spread across many smaller storage devices rather than one very large one). For more, read my post: What the Heck is… Hadoop? And Why You Should Know About It

14. Internet of Things: A term describing the phenomenon that more and more everyday items will collect, analyse and transmit data to increase their usefulness – self-driving cars and self-stocking refrigerators, for example. For more, read my post: What The Heck is… The Internet of Things?

15. MapReduce: Refers to the software procedure of breaking up an analysis into pieces that can be distributed across different computers in different locations.
It first distributes the analysis (map) and then collects the results back into one report (reduce). Several organisations, including Google and the Apache Software Foundation (as part of its Hadoop framework), provide MapReduce tools.

16. Natural Language Processing: Software algorithms designed to allow computers to more accurately understand everyday human speech, allowing us to interact with them more naturally and efficiently.

17. NoSQL: Refers to database management systems that do not (or do not only) use the relational tables found in traditional database systems. It covers data storage and retrieval systems designed to handle large volumes of data without tabular categorisation (or schemas).

18. Predictive Analytics: The process of using analytics to predict trends or future events from data.

19. R: A popular open source software environment used for analytics.

20. RFID: Radio Frequency Identification. RFID tags use Automatic Identification and Data Capture technology to transmit information about their location, direction of travel or proximity to each other to computer systems, allowing real-world objects to be tracked online.

21. Software-as-a-Service (SaaS): The growing tendency of software producers to provide their programs over the cloud – meaning users pay for the time they spend using them (or the amount of data they access) rather than buying software outright.

22. Structured v Unstructured Data: Structured data is basically anything that can be put into a table and organized in such a way that it relates to other data in the same table. Unstructured data is everything that can’t – email messages, social media posts and recorded human speech, for example.

As always, I hope you found this post useful. For more, please check out my other posts in The Big Data Guru column and feel free to connect with me via social channels.
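To make the “algorithm” entry above a little more concrete, here is a minimal sketch in Python of a fixed sequence of calculation steps run automatically over data – in this case a toy outlier check. The function name and the 1.5-standard-deviation threshold are my own illustrative choices, not part of any standard definition:

```python
# A toy "algorithm" in the glossary's sense: a fixed sequence of
# calculation steps applied automatically to data. This one flags
# values that sit far from the average of the data set.
def flag_outliers(values, threshold=1.5):
    mean = sum(values) / len(values)                          # step 1: average
    var = sum((v - mean) ** 2 for v in values) / len(values)  # step 2: variance
    std = var ** 0.5                                          # step 3: spread
    return [v for v in values if abs(v - mean) > threshold * std]

print(flag_outliers([10, 11, 9, 10, 42]))  # → [42]
```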
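The MapReduce entry describes distributing the analysis (map) and then collecting the results (reduce). A minimal single-machine sketch of that idea in Python – a toy word count, where real frameworks such as Hadoop would run the same two phases across many machines:

```python
# Illustrative MapReduce-style word count in plain Python.
# Both phases run locally here; the point is only the map/reduce split.
from collections import Counter
from functools import reduce

def map_phase(chunk):
    """Map: turn one chunk of text into partial word counts."""
    return Counter(chunk.split())

def reduce_phase(counts_a, counts_b):
    """Reduce: merge two partial results into one."""
    return counts_a + counts_b

chunks = ["big data big insights", "big analytics"]  # stand-ins for distributed splits
partials = [map_phase(c) for c in chunks]            # distribute the analysis (map)
total = reduce(reduce_phase, partials)               # collect the results (reduce)
print(total["big"])  # → 3
```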
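Predictive analytics in miniature: one of the simplest predictive techniques is fitting a least-squares trend line to past data points and extrapolating the next value. This sketch is purely illustrative (the function name is mine), not a production forecasting method:

```python
# Fit a straight-line trend to past values and predict the next one
# using the standard least-squares slope and intercept formulas.
def predict_next(ys):
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * n + intercept  # extrapolate one step past the data

print(predict_next([10, 12, 14, 16]))  # → 18.0
```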
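Finally, the structured v unstructured distinction can be shown in a few lines of Python: structured records share named fields and can be queried directly, while unstructured text (the example strings below are made up) first needs parsing or analysis to extract meaning:

```python
# Structured data fits a table: every record has the same named fields.
structured = [
    {"name": "Ana", "age": 34, "city": "Lisbon"},
    {"name": "Ben", "age": 29, "city": "Leeds"},
]

# Unstructured data has no fixed schema -- e.g. a free-text email body.
unstructured = "Hi team, the Q3 numbers look great. Let's sync on Friday!"

# Structured data can be queried by field name...
over_30 = [row["name"] for row in structured if row["age"] > 30]
print(over_30)  # → ['Ana']

# ...while unstructured data needs text analysis to answer even
# simple questions, such as whether the tone is positive.
print("great" in unstructured.lower())  # → True
```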