Based on materials from research&trends

Big Data has been the talk of the IT and marketing press for several years now. And the reason is clear: digital technologies have permeated the life of the modern person, and “everything gets recorded.” The volume of data on all aspects of life keeps growing, and the capacity to store information is growing along with it.

Global technologies for storing information

Source: Hilbert and Lopez, “The world's technological capacity to store, communicate, and compute information,” Science, 2011.

Most experts agree that the accelerating growth of data is an objective reality. Social networks, mobile devices, readings from measuring devices, business information: these are just a few of the source types capable of generating gigantic volumes of information. According to the IDC Digital Universe study published in 2012, over the following eight years the amount of data in the world will reach 40 ZB (zettabytes), which is equivalent to 5,200 GB for every inhabitant of the planet.
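The per-capita figure is easy to sanity-check, assuming a world population of roughly 7.7 billion (the approximate 2020 figure; the study itself does not state the population it used):

```python
# Back-of-the-envelope check of the IDC projection: 40 ZB of data
# divided across the world's population. The 7.7 billion figure is an
# assumption (approximate population around 2020), not from the study.
ZB_IN_GB = 10**12  # 1 ZB = 10^21 bytes = 10^12 GB

total_gb = 40 * ZB_IN_GB
population = 7.7e9

per_person_gb = total_gb / population
print(f"{per_person_gb:.0f} GB per person")  # close to the quoted 5,200 GB
```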

Growth of digital information collection in the US


Source: IDC

Much of this information is created not by people but by robots interacting with each other and with other data networks, such as sensors and smart devices. At this rate of growth, researchers expect the amount of data in the world to double every year. The number of virtual and physical servers in the world will grow tenfold as data centers expand and new ones are built. As a result, there is a growing need to use and monetize this data effectively. Since applying Big Data in business requires considerable investment, you need a clear understanding of the situation. And it is, in essence, simple: you can increase business efficiency by reducing costs and/or increasing sales.

Why do we need Big Data?

The Big Data paradigm defines three main types of problems.

  • Storing and managing hundreds of terabytes or petabytes of data that conventional relational databases cannot handle effectively.
  • Organizing unstructured information consisting of text, images, video, and other types of data.
  • Analyzing Big Data, which raises questions about how to work with unstructured information, generate analytical reports, and build predictive models.

The Big Data project market intersects with the business analytics (BA) market, whose global volume, according to experts, amounted to about $100 billion in 2012. It comprises network technologies, servers, software, and technical services.

The use of Big Data technologies is also relevant for revenue assurance (RA) solutions designed to automate companies' operations. Modern revenue assurance systems include tools for detecting inconsistencies and for in-depth data analysis, allowing the timely detection of possible losses or distortions of information that could reduce financial results. Against this background, Russian companies, confirming demand for Big Data technologies in the domestic market, note that the factors stimulating the development of Big Data in Russia are data growth, faster managerial decision-making, and improvements in decision quality.

What prevents you from working with Big Data

Today, only 0.5% of accumulated digital data is analyzed, even though there are objective, industry-wide problems that could be solved with Big Data analytics. Developed IT markets already have results that can be used to evaluate the expectations associated with accumulating and processing big data.

One of the main factors slowing the implementation of Big Data projects, besides their high cost, is the problem of selecting the data to be processed: that is, determining which data should be retrieved, stored, and analyzed, and which should be ignored.

Many business representatives note that difficulties in implementing Big Data projects stem from a shortage of specialists, both marketers and analysts. The speed of return on Big Data investment directly depends on the quality of work of the employees engaged in in-depth and predictive analytics. The enormous potential of data that already exists in an organization often cannot be used effectively by marketers themselves because of outdated business processes or internal regulations. Big Data projects are therefore often perceived by business as difficult not only to implement but also to evaluate: it is hard to assess the value of the collected data. The specific nature of working with data requires marketers and analysts to shift their attention from technology and report creation to solving concrete business problems.

Due to the large volume and high speed of the data flow, data collection involves ETL procedures performed in real time. For reference: ETL (from the English Extract, Transform, Load) is one of the main processes in data warehouse management, comprising the extraction of data from external sources, its transformation and cleaning to meet business needs, and its loading into the warehouse. ETL should be viewed not only as a process of moving data from one application to another, but also as a tool for preparing data for analysis.
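As a minimal illustration of those three steps, here is an ETL sketch in Python; the field names and cleaning rules are invented for illustration and do not come from any particular ETL product:

```python
# Minimal ETL sketch: extract raw records, transform/clean them to a
# target schema, and load them into a "warehouse" (here just a list).
# The field names and cleaning rules are illustrative assumptions.

def extract(source):
    """Extract: read raw records from an external source."""
    return list(source)

def transform(records):
    """Transform: clean and normalize records for the warehouse schema."""
    cleaned = []
    for rec in records:
        if not rec.get("user_id"):          # drop records missing a key
            continue
        cleaned.append({
            "user_id": int(rec["user_id"]),
            "event": rec.get("event", "unknown").strip().lower(),
        })
    return cleaned

def load(warehouse, records):
    """Load: append the prepared records to the target store."""
    warehouse.extend(records)
    return warehouse

raw = [
    {"user_id": "42", "event": " Click "},
    {"event": "view"},                       # no user_id: filtered out
    {"user_id": "7", "event": "Purchase"},
]

warehouse = load([], transform(extract(raw)))
print(warehouse)
# [{'user_id': 42, 'event': 'click'}, {'user_id': 7, 'event': 'purchase'}]
```

In a real pipeline each step would run continuously against external systems; the point here is only the separation of the three stages.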

In addition, securing the data that arrives from external sources requires solutions that match the volume of information collected. Since Big Data analysis methods develop only in the wake of data growth, the ability of analytical platforms to adopt new methods of preparing and aggregating data plays a major role. This suggests that, for example, data about potential buyers, or a massive warehouse of click histories from online shopping sites, may be of interest for solving a variety of problems.

Difficulties are no deterrent

Despite all the difficulties of implementing Big Data, business intends to increase its investment in this area. According to Gartner, in 2013, 64% of the world's largest companies had already invested, or had plans to invest, in deploying Big Data technologies for their business, up from 58% in 2012. Gartner research shows that the industries leading in Big Data investment are media, telecom, banking, and service companies. Many major retail players have already achieved successful results from Big Data, using data obtained through radio-frequency identification, logistics and replenishment systems, and loyalty programs. Retail's successful experience encourages other market sectors to find new, effective ways of monetizing big data and to turn its analysis into a resource that works for business development. Thanks to this, experts predict that in the period up to 2020 the cost of managing and storing data will fall from $2.00 to $0.20 per gigabyte, while investment in studying and analyzing the technological properties of Big Data will grow by only 40%.

The costs presented in various Big Data investment projects are of different kinds. Cost items depend on the types of products selected for particular solutions. The largest share of costs in investment projects, according to experts, falls on products related to data collection, structuring, cleaning, and information management.

How it's done

There are many combinations of software and hardware that allow you to build effective Big Data solutions for various business disciplines, from social media and mobile applications to business data mining and visualization. An important advantage of Big Data is the compatibility of the new tools with databases already in wide business use, which is especially important for cross-disciplinary projects such as organizing multi-channel sales and customer support.

The sequence of working with Big Data consists of collecting data, structuring the information received using reports and dashboards, creating insights and context, and formulating recommendations for action. Since working with Big Data involves a large outlay on collecting data whose processing result is unknown in advance, the main task is to understand clearly what the data is for, not how much of it is available. Data collection then becomes a process of obtaining only the information needed to solve specific problems.

For example, telecommunications providers aggregate a huge amount of data, including geolocation, that is constantly updated. This information may be of commercial interest to advertising agencies, which can use it to serve targeted and location-based advertising, as well as to retailers and banks. Such data can play an important role in deciding whether to open a retail outlet in a given location, based on evidence of a strong, targeted flow of people. There is an example of measuring the effectiveness of advertising on outdoor billboards in London. At present, the reach of such advertising can only be measured by stationing people with special counting devices near the billboards. Compared with that kind of measurement, a mobile operator has far more capabilities: it knows exactly where its subscribers are, and it knows their demographic characteristics, such as gender, age, and marital status.

Based on such data, there is the future prospect of tailoring the content of an advertising message to the preferences of the particular person passing the billboard. If the data shows that a passer-by travels a lot, they could be shown an advertisement for a resort. The organizers of a football match can only estimate the number of fans once they arrive at the match. But if they could request information from the mobile operator about where visitors were an hour, a day, or a month before the match, the organizers could plan where to advertise upcoming matches.

Another example is how banks can use Big Data to prevent fraud. If a client reports a card lost, and the bank then sees in real time, when a purchase is made with that card, that the client's phone is near the place of the transaction, the bank can cross-check the client's report and detect an attempt at deception. In the opposite situation, when a client makes a purchase in a store and the bank sees that the card used for the transaction and the client's phone are in the same place, it can conclude that the card is being used by its owner. Thanks to such advantages, Big Data is expanding the boundaries of traditional data warehouses.
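The location cross-check described above can be sketched as a distance test between the transaction and the phone's last known position. The coordinates and the 1 km threshold below are illustrative assumptions, not any bank's actual rule:

```python
import math

# Sketch of a geolocation cross-check: compare the location of a card
# transaction with the last known location of the cardholder's phone.
# Threshold and coordinates are invented for illustration.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def transaction_looks_legitimate(txn_pos, phone_pos, max_km=1.0):
    """Flag the transaction as plausible if card and phone are close."""
    return haversine_km(*txn_pos, *phone_pos) <= max_km

store = (55.7558, 37.6173)         # transaction location (central Moscow)
phone_nearby = (55.7560, 37.6180)  # phone a few hundred meters away
phone_far = (59.9343, 30.3351)     # phone in another city

print(transaction_looks_legitimate(store, phone_nearby))  # True
print(transaction_looks_legitimate(store, phone_far))     # False
```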

To decide successfully on implementing Big Data solutions, a company needs to work out an investment case, and this causes great difficulty because of the many unknowns. In such cases, the paradox of analytics is that it predicts the future from the past, data about which is often missing. Here, clear planning of your initial actions is the key factor:

  • First, it is necessary to determine one specific business problem for which Big Data technologies will be used; this task will become the core of determining the correctness of the chosen concept. You need to focus on collecting data related to this specific task, and during the proof of concept, you can use various tools, processes and management techniques that will allow you to make more informed decisions in the future.
  • Secondly, it is unlikely that a company without data analytics skills and experience will be able to successfully implement a Big Data project. The necessary knowledge always stems from previous analytics experience, which is the main factor influencing the quality of working with data. Data culture is important because often data analysis reveals hard truths about a business, and it takes data practices to accept and work with those truths.
  • Third, the value of Big Data technologies lies in providing insights. Good analysts remain in short supply on the market. They are usually defined as specialists with a deep understanding of the commercial meaning of data who know how to use it correctly. Data analysis is a means of achieving business goals, and to realize the value of Big Data you need to act accordingly and understand what your actions are for. Big data will then provide a wealth of useful information about consumers, on the basis of which business-relevant decisions can be made.

Although the Russian Big Data market is only beginning to take shape, individual projects in this area are already being implemented quite successfully. Some succeed at data collection, such as the projects for the Federal Tax Service and Tinkoff Credit Systems Bank; others at data analysis and the practical application of its results, such as the Synqera project.

Tinkoff Credit Systems Bank implemented the EMC Greenplum platform, a tool for massively parallel computing. In recent years the bank's requirements for the speed of processing accumulated information and for real-time data analysis have grown, driven by the rapid growth in the number of credit card users. The bank has announced plans to expand its use of Big Data technologies, in particular for processing unstructured data and working with corporate information received from various sources.

The Federal Tax Service of Russia is currently building an analytical layer for its federal data warehouse. On this basis, a single information space and a technology for accessing tax data for statistical and analytical processing are being created. The project includes work to centralize analytical information from more than 1,200 sources at the local level of the Federal Tax Service.

Another interesting example of real-time big data analysis is the Russian startup Synqera, which developed the Simplate platform. The solution is based on processing large volumes of data: the program analyzes information about customers, their purchase history, age, gender, and even mood. Touch screens with sensors that recognize customers' emotions were installed at the checkout counters of a cosmetics store chain. The program determines a person's mood, analyzes information about them, determines the time of day, and scans the store's discount database, after which it sends the shopper targeted messages about promotions and special offers. This solution increases customer loyalty and lifts retailers' sales.

Among successful foreign cases, the experience of Dunkin' Donuts, which uses real-time data to sell products, is interesting. Digital displays in its stores show offers that change every minute, depending on the time of day and product availability. From cash receipts, the company learns which offers got the greatest response from customers. This approach to data processing allowed the company to increase profits and warehouse turnover.

As experience with implementing Big Data projects shows, this field is well suited to solving modern business problems. At the same time, an important factor in achieving commercial goals when working with Big Data is choosing the right strategy, one that includes analytics to identify consumer needs as well as the use of innovative Big Data technologies.

According to a global survey conducted annually since 2012 by Econsultancy and Adobe among corporate marketers, “big data” characterizing people's actions on the Internet can do a great deal. It can optimize offline business processes, help understand how mobile device owners use their devices to search for information, or simply “make marketing better,” i.e. more efficient. Moreover, the last function is growing in popularity year after year, as the diagram we present shows.

The main areas of work of Internet marketers in terms of customer relations


Source: Econsultancy and Adobe; published by emarketer.com

Note that the respondents' nationality does not matter much. As a 2013 KPMG survey shows, the share of “optimists,” i.e. those who use Big Data when developing business strategy, is 56%, and the variation from region to region is small: from 63% in North America to 50% in EMEA.

Using Big Data in different regions of the world


Source: KPMG; published by emarketer.com

Meanwhile, marketers' attitude to such “fashionable trends” is somewhat reminiscent of a well-known joke:

“Tell me, Vano, do you like tomatoes?”
“To eat, yes. Otherwise, not so much.”

Although marketers verbally “love” Big Data and even seem to use it, in reality “it's complicated,” as people write about their romantic attachments on social networks.

According to a survey conducted by Circle Research in January 2014 among European marketers, 4 out of 5 respondents do not use Big Data (even though, of course, they “love it”). The reasons vary. Inveterate skeptics are few, at 17%, exactly matched by their antipodes, those who confidently answer “yes.” The rest are the hesitant and doubting “swamp.” They avoid a direct answer under plausible pretexts like “not yet, but soon” or “we'll wait until the others start.”

Use of Big Data by marketers, Europe, January 2014


Source: dnx; published by emarketer.com

What holds them back? Rather mundane things. Some (exactly half of them) simply do not trust the data. Others (55%, also quite a few) find it difficult to match up their sets of “data” and “users.” Some simply suffer from (to put it politely) internal corporate disorder: data wanders unattended between marketing departments and IT. For others, the software cannot cope with the influx of work. And so on. Since the shares add up to well over 100%, a situation of “multiple barriers” is clearly not uncommon.

Barriers to the use of Big Data in marketing


Source: dnx; published by emarketer.com

Thus, we have to admit that for now “Big Data” is a great potential that has yet to be tapped. Incidentally, this may be why Big Data is losing its halo as a “fashionable trend,” as evidenced by the survey conducted by Econsultancy that we have already mentioned.

The most significant trends in digital marketing 2013-2014


Source: Econsultancy and Adobe

It is being replaced by another king: content marketing. For how long?

It cannot be said that Big Data is a fundamentally new phenomenon. Large data sources have existed for many years: databases of customer purchases, credit histories, lifestyles. For years, scientists have used this data to help companies assess risk and predict future customer needs. Today, however, the situation has changed in two respects:

  • More sophisticated tools and techniques have emerged for analyzing and combining different data sets;
  • these analytical tools are complemented by an avalanche of new data sources driven by the digitalization of virtually all methods of data collection and measurement.

The range of information now available is both inspiring and daunting for researchers raised in structured research environments. Consumer sentiment is captured by websites and all kinds of social media. Ad viewing is recorded not only by set-top boxes but also by digital tags and mobile devices that communicate with the TV.

Behavioral data (such as call volume, shopping habits and purchases) is now available in real time. Thus, much of what could previously be obtained through research can now be learned using big data sources. And all these information assets are generated constantly, regardless of any research processes. These changes make us wonder whether big data can replace classic market research.

It's not about the data, it's about the questions and answers.

Before we sound the death knell for classic research, we must remind ourselves that it is not the presence of certain data assets that is critical, but something else. What exactly? Our ability to answer questions, that's what. One funny thing about the new world of big data is that the results obtained from new data assets lead to even more questions, and these questions are usually best answered by traditional research. Thus, as big data grows, we see a parallel increase in the availability and need for “small data” that can provide answers to questions from the world of big data.

Consider the situation: a large advertiser continuously monitors store traffic and sales volumes in real time. Existing research methodologies (in which we survey panelists about their purchasing motivations and point-of-sale behavior) help us better target specific buyer segments. These techniques can be expanded to include a wider range of big data assets, to the point where big data becomes a means of passive observation, and research becomes a method of ongoing, narrowly focused investigation of changes or events that require study. This is how big data can free research from unnecessary routine. Primary research no longer has to focus on what is happening (big data will do that). Instead, primary research can focus on explaining why we observe particular trends or deviations from trends. The researcher will be able to think less about obtaining data and more about how to analyze and use it.

At the same time, we see that big data can solve one of our biggest problems: the problem of overly long studies. Examination of the studies themselves has shown that over-inflated research instruments have a negative impact on data quality. Although many experts had long acknowledged this problem, they invariably responded with the phrase, “But I need this information for senior management,” and the long interviews continued.

In the world of big data, where quantitative metrics can be obtained through passive observation, this issue becomes moot. Again, let's think about all these studies regarding consumption. If big data gives us insight into consumption through passive observation, then primary survey research no longer needs to collect this kind of information, and we can finally back up our vision of short surveys with something more than wishful thinking.

Big Data needs your help

Finally, “big” is just one characteristic of big data. The characteristic “large” refers to the size and scale of the data. Of course, this is the main characteristic, since the volume of this data is beyond anything we have worked with before. But other characteristics of these new data streams are also important: they are often poorly formatted, unstructured (or, at best, partially structured), and full of uncertainty. An emerging field of data management, aptly named entity analytics, addresses the problem of cutting through the noise in big data. Its job is to analyze these data sets and figure out how many observations refer to the same person, which observations are current, and which ones are usable.
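Entity analytics of the kind described above can be reduced, in its simplest form, to normalizing an identifying key and grouping observations by it. This is a toy sketch: the field names and the e-mail-based key are assumptions for illustration, while real entity analytics matches probabilistically across many attributes:

```python
from collections import defaultdict

# Toy entity resolution: group observations that refer to the same
# person by normalizing an identifying key (here, an e-mail address).
# Only the grouping step is shown; production systems score matches
# across many noisy fields.

def normalize(email):
    """Normalize the identifier so trivially different forms match."""
    return email.strip().lower()

observations = [
    {"email": "Ivan@example.com ", "source": "web"},
    {"email": "ivan@example.com",  "source": "mobile"},
    {"email": "olga@example.com",  "source": "web"},
]

entities = defaultdict(list)
for obs in observations:
    entities[normalize(obs["email"])].append(obs["source"])

print(dict(entities))
# {'ivan@example.com': ['web', 'mobile'], 'olga@example.com': ['web']}
```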

This type of data cleaning is necessary to remove noise or erroneous data when working with large or small data assets, but it is not sufficient. We must also create context around big data assets based on our previous experience, analytics, and category knowledge. In fact, many analysts point to the ability to manage the uncertainty inherent in big data as a source of competitive advantage, as it enables better decisions to be made.

This is where primary research not only finds itself liberated by big data, but also contributes to content creation and analysis within big data.

A prime example is the application of our new, fundamentally different brand equity framework to social media (this refers to the Meaningfully Different Framework, a new approach to measuring brand equity developed at Millward Brown). The model has been validated against behavior within specific markets, is implemented on a standard basis, and is easy to apply in other marketing areas and in decision-support information systems. In other words, our brand equity model, informed by (though not exclusively based on) survey research, has all the features needed to overcome the unstructured, fragmented, and uncertain nature of big data.

Consider the consumer sentiment data provided by social media. In raw form, peaks and troughs in consumer sentiment very often correlate only minimally with offline measures of brand equity and behavior: there is simply too much noise in the data. But we can reduce this noise by applying our models of consumer meaning, brand differentiation, dynamics, and distinctive features to the raw sentiment data, processing and aggregating social media data along those dimensions.

Once the data is organized according to our framework, the trends identified typically align with offline measures of brand equity and behavior. In essence, social media data cannot speak for itself; using it for this purpose requires our experience and our brand-centered models. And when social media gives us unique information expressed in the language consumers use to describe brands, we should use that language in designing our research, making primary research far more effective.
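The aggregation step can be illustrated with the simplest possible noise-reduction technique, a moving average over a daily sentiment series. This is only a stand-in: the scores below are invented, and the actual models aggregate along framework dimensions rather than just over time.

```python
# Smoothing a noisy daily sentiment series with a simple moving average,
# the most basic form of aggregation. The scores are invented; a real
# pipeline would aggregate along model dimensions, not just over time.

def moving_average(series, window=3):
    """Average each point with the preceding points in a fixed window."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

daily_sentiment = [0.2, 0.9, -0.5, 0.4, 0.3, -0.6, 0.5]
smoothed = moving_average(daily_sentiment)
print([round(x, 2) for x in smoothed])
```

The smoothed series swings far less than the raw one, which is exactly the property that lets aggregated sentiment line up with slower-moving offline measures.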

The Benefits of Liberated Research

This brings us back to how big data is not so much replacing research as liberating it. Researchers will be freed from the need to create a new study for each new case. The ever-growing big data assets can be used for different research topics, allowing subsequent primary research to delve deeper into the topic and fill existing gaps. Researchers will be freed from having to rely on over-inflated surveys. Instead, they can use short surveys and focus on the most important parameters, which improves data quality.

With this liberation, researchers will be able to use their established principles and ideas to add precision and meaning to big data assets, leading to new areas for survey research. This cycle should lead to greater understanding on a range of strategic issues and, ultimately, movement towards what should always be our primary goal - to inform and improve the quality of brand and communications decisions.


It is not enough to generate big data and admire its volume and diversity. Information must be useful, and for that it must be properly stored, processed, and analyzed, and new knowledge must be synthesized from it in order to draw constructive conclusions.

The popular term “Big Data” is more a piece of marketing than a concept with deep content of its own, distinct from “data” in general. Humanity has been producing and processing information ever since it learned to read and count, and whether data is big or small is a matter of subjective judgment.

The market for storing and processing information has long been held by large software vendors such as Oracle, IBM, Teradata, and Microsoft, who base their products on relational database management systems. The volumes of information stored in such databases are colossal and can, without exaggeration, also be called “large.” But as a rule, that data has a well-thought-out internal structure from the outset, with defined relationships between its constituent elements.

Since 2008, when Clifford Lynch first coined the term “Big Data,” the information processing industry has faced explosive growth in data that was difficult to fit into the good old relational stores because of its unstructured nature, and impossible to organize quickly because of its variety and the high rate of volume growth. This meant data from numerous measuring devices and wearable electronics, message streams in social networks, weather information, and event logs. Another reason existing solutions did not fit was the extremely high cost of storing truly large volumes of information in relational databases from the major vendors. The “big data” problem first surfaced in the research community, whose members could not afford to buy “another couple of Exadata racks.”

The need for relatively inexpensive storage and processing of gigantic volumes of unstructured information led to the creation of specialized software that made it possible to spread data across clusters of hundreds and thousands of nodes and to process it in parallel. This is how Hadoop was born: an open framework under the wing of the Apache Software Foundation that made it possible to build distributed systems on relatively inexpensive commodity hardware.

Gradually, Hadoop acquired a set of libraries and utilities and formed an ecosystem of distributed data processing projects around itself. The core of the framework consists of the Hadoop Distributed File System (HDFS), the YARN job scheduler and cluster manager, Hadoop's own implementation of the MapReduce model for parallel data processing, and the Hadoop Common set of shared utilities.
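The MapReduce model at the heart of the framework is easiest to see in the canonical word-count example. The sketch below is plain Python rather than the real Hadoop API: Hadoop distributes the map, shuffle, and reduce phases across a cluster, while here they run in a single process just to show the data flow.

```python
from collections import defaultdict

# Word count in the MapReduce style: a map phase emits (key, 1) pairs,
# a shuffle phase groups the pairs by key, and a reduce phase sums each
# group. Hadoop runs these phases across cluster nodes; this in-process
# sketch only illustrates the data flow.

def map_phase(lines):
    """Map: emit (word, 1) for every word in every input line."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the values for each key."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big plans", "data beats opinions"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)
# {'big': 2, 'data': 2, 'plans': 1, 'beats': 1, 'opinions': 1}
```

Because each map call and each reduce call touches only its own slice of the data, the same program can be scaled out to thousands of nodes, which is precisely what Hadoop automates.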

Hadoop was open enough to become the foundation of several commercial implementations from Cloudera, MapR, and Hortonworks, each offering its own distribution. In 2011, Hadoop caught the attention of the mastodons Oracle, IBM, and Teradata, who added it to their product lines, not forgetting to devote entire Big Data sections of their websites to it, complete with the obligatory mention of the cute elephant.

Software for working with big data is not a black box into which you can simply shovel a huge pile of data and have it turn into something meaningful on its own. For conventional SQL-based work with information on top of Hadoop, the Hive or Impala DBMSs are deployed; if you need the capabilities of a NoSQL solution, use HBase.

The pool of big data processing software does not end with Hadoop. You can store information in the Amazon S3 cloud or the Cassandra NoSQL database, manage cluster resources with Apache Mesos, and retrieve and process data with Apache Spark, which can run outside of Hadoop. Spark has lately been gaining popularity, as it promises to speed up distributed data processing programs by up to 100 times compared with Hadoop MapReduce. It can run on top of HDFS under Hadoop YARN, as well as outside the Hadoop framework on Cassandra, Amazon S3, and Google Cloud Storage (via Alluxio).

Data extracted from distributed systems is processed in analytical tools from SAS (Enterprise Miner), IBM (SPSS), Teradata (Aster Analytics), or Oracle (Advanced Analytics), or in a host of other commercial and open-source solutions. An illustration on the Teradata website neatly demonstrates how Big Data tools such as Hadoop and Spark are integrated into a complete information processing infrastructure alongside classic systems built on relational databases.

Software for working with big data does not replace the other tools for processing, business analytics, visualization, and forecasting; it merely puts its mighty shoulder under the pipe carrying the rushing stream of ever-incoming terabytes and directs it where it needs to go.
