1 Summary
A combination query is a query with multiple filter conditions, and it appears in many scenarios. Filtering all the products on a shopping website by category, price, sales-volume range, and other attributes to find the ones a customer needs is a typical combination query. When the data volume is small, the backend can quickly filter the required data with a simple SQL statement, but as the data volume grows, SQL query efficiency plummets. Once the data reaches a certain order of magnitude, the server becomes overloaded and may even crash, and storing such a large volume of data also becomes a problem. This article discusses a solution that delivers second-level response for multi-condition combination queries over 100 million records.
2 Approach
2.1 Data storage
Suppose each record has 10 fields, each field is 4 bytes, and there are 100 million records in total. With a traditional relational database such as MySQL, inserting the 100 million records through JDBC with a mix of batching and transactions takes about half an hour, and fields that may be empty lead to storage redundancy. For massive data storage, the most common choice today is HBase. Using HBase has three benefits. First, it is a non-relational database: a null field exists only logically and occupies no physical space, which eliminates the redundancy problem. Second, it is a column-oriented database: fields can be scaled out with simple API calls. Third, it is a sorted database: the RowKeys of a table are sorted lexicographically, and Regions are sharded by RowKey split points, which effectively forms a global, distributed index, so a lookup by RowKey returns in milliseconds. Data can be inserted into HBase either through batched puts or through a MapReduce program. In our tests, with the batch size set to 1000 records per submit and 10 threads, inserting 100 million records took about 10 minutes. To speed up insertion further, increase the batch size, adjust the number of threads, or write to HBase with a MapReduce program. HBase is a distributed database: data is stored across multiple nodes, managed uniformly by ZooKeeper, with data backup and failure recovery built in. We therefore use HBase as the data warehouse for the structured data.
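For reference, below is a minimal single-threaded sketch of the batch-insert path, written against the same legacy HBase client API as the indexing code in Section 2.2. The 10-minute figure quoted above was measured with 10 such writers running in parallel. The table name test1, the column family people, the field names, and the sequential RowKey scheme are assumptions carried over from the example later in this article.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchInsertSketch {
    private static final int BATCH_SIZE = 1000;   // records per batch submit
    private static final long TOTAL = 100000000L; // 100 million records

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "test1"); // hypothetical table name
        table.setAutoFlush(false);                // buffer puts on the client side
        try {
            List<Put> batch = new ArrayList<Put>(BATCH_SIZE);
            for (long i = 0; i < TOTAL; i++) {
                // A zero-padded sequence number keeps RowKeys lexicographically
                // ordered; a real design would salt or hash to avoid hotspotting
                Put put = new Put(Bytes.toBytes(String.format("%010d", i)));
                put.add(Bytes.toBytes("people"), Bytes.toBytes("sex"),
                        Bytes.toBytes("male"));
                // ... the remaining 9 fields are added the same way
                batch.add(put);
                if (batch.size() == BATCH_SIZE) { // submit one batch of 1000
                    table.put(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                table.put(batch);                 // flush the last partial batch
            }
            table.flushCommits();                 // push any client-buffered puts
        } finally {
            table.close();
        }
    }
}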
2.2 Data query
There are only two ways to query data in HBase. The first is get 'tablename', 'rowkey': a direct lookup by RowKey, which returns in milliseconds even on 100 million records. The second is a full-table Scan with filters, which takes a long time on massive data (and also depends on server performance). Our requirement is second-level response; with a full table scan, real-time response becomes impossible once the data reaches the tens or hundreds of thousands of rows. Such queries are usually pushed down to MapReduce computation through tools like Hive or Pig, which wastes the cluster's computing resources and leaves the application crippled by high latency. So we want to query by RowKey. But serving multi-condition combination queries through the RowKey alone places very high demands on RowKey design, which is very unfriendly to programmers from a business perspective. We therefore need a secondary index: scan a separate index table per index type, then merge the scan results to obtain the target RowKeys. HBase has a native way to build secondary indexes, namely its coprocessor mechanism, which can be flexibly tailored to the business but is rather complicated. This article instead uses a simpler and more direct way to build indexes, suited to a relatively fixed business model: Solr. Solr is a standalone enterprise search application server and the open-source enterprise search platform of the Apache Lucene project. Its main features include full-text retrieval, hit highlighting, faceted search, dynamic clustering, database integration, and rich-document (e.g. Word, PDF) handling. Solr is highly scalable and also provides distributed search and index replication. We can use the Solr component directly and meet the business requirements by modifying its configuration files. Building the index in batch over the 10 fields of the 100 million HBase records took 3383 s, about an hour. The code is as follows:
import java.io.IOException;
import java.util.Properties;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;
import org.apache.solr.common.SolrInputDocument;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ThreadsCreateIndexWork {
    private static final Logger logger =
            LoggerFactory.getLogger(ThreadsCreateIndexWork.class);

    public static void main(String[] args) throws IOException,
            SolrServerException {
        if (args.length < 3) {
            logger.info("[tableName | queueSize | threadCount]");
            logger.info("e.g.| test1 20000 20");
            return; // required arguments are missing
        }
        String tableName = args[0];
        String queueSize = args[1];
        String threadCount = args[2];

        long start = System.currentTimeMillis();

        // Read the Solr server address from the project configuration file
        // (PropertiesReaderUtils is a project-local utility)
        Properties prop =
                PropertiesReaderUtils.getProperties("conf/path.properties");
        String server = prop.getProperty("solr.server");
        // Buffered SolrJ client: documents are queued up to queueSize and
        // flushed to Solr by threadCount background threads
        SolrServer solrServer = new ConcurrentUpdateSolrServer(server,
                Integer.parseInt(queueSize), Integer.parseInt(threadCount));

        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, tableName); // the HBase table to index

        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("people")); // the column family to index
        scan.setCaching(500);       // fetch 500 rows per RPC to speed up the scan
        scan.setCacheBlocks(false); // a full scan should not pollute the block cache

        ResultScanner ss = table.getScanner(scan);
        try {
            for (Result r : ss) {
                SolrInputDocument solrDoc = new SolrInputDocument();
                solrDoc.addField("rowkey", new String(r.getRow()));
                for (KeyValue kv : r.raw()) {
                    String fieldName = new String(kv.getQualifier());
                    String fieldValue = new String(kv.getValue());
                    // index only the 10 business fields
                    if (fieldName.equalsIgnoreCase("upperClothing")
                            || fieldName.equalsIgnoreCase("lowerClothing")
                            || fieldName.equalsIgnoreCase("coatStyle")
                            || fieldName.equalsIgnoreCase("trousersStyle")
                            || fieldName.equalsIgnoreCase("sex")
                            || fieldName.equalsIgnoreCase("age")
                            || fieldName.equalsIgnoreCase("angle")
                            || fieldName.equalsIgnoreCase("bag")
                            || fieldName.equalsIgnoreCase("umbrella")
                            || fieldName.equalsIgnoreCase("featureType")) {
                        solrDoc.addField(fieldName, fieldValue);
                    }
                }
                solrServer.add(solrDoc); // queued; sent by the background threads
            }
            solrServer.commit(); // make the new documents visible to searches
        } catch (IOException e) {
            logger.error("scanning " + tableName + " failed", e);
        } finally {
            ss.close();
            table.close();
            solrServer.shutdown(); // drain the queue and release client resources
        }

        long time = System.currentTimeMillis() - start;
        logger.info("---------- create index with thread use time " +
                String.valueOf(time));
    }
}
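With the index built, a combination query runs in two steps: send the combined conditions to Solr to get the matching RowKeys, then fetch the full records from HBase by RowKey, which is the millisecond-level access path described in Section 2.1. Below is a minimal sketch of this flow using the same SolrJ and HBase client APIs as above; the Solr core URL and the query field values are assumptions for illustration.

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class CombinationQuerySketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical Solr core URL; a SolrCloud deployment would use
        // CloudSolrServer pointed at the ZooKeeper ensemble instead
        SolrServer solr = new HttpSolrServer("http://solr-host:8983/solr/people");

        // Combined conditions expressed as one boolean query;
        // the field values here are made up for illustration
        SolrQuery query = new SolrQuery("sex:male AND bag:yes AND age:[20 TO 30]");
        query.setFields("rowkey"); // only the RowKeys are needed from Solr
        query.setRows(1000);       // cap the number of hits returned per page

        QueryResponse rsp = solr.query(query);

        // Turn every hit into an HBase Get on the target RowKey
        List<Get> gets = new ArrayList<Get>();
        for (SolrDocument doc : rsp.getResults()) {
            gets.add(new Get(Bytes.toBytes((String) doc.getFieldValue("rowkey"))));
        }

        // Execute all Gets in one batched round trip; each one is a direct
        // RowKey lookup, so the overall query stays at second-level latency
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "test1"); // table name from the example above
        try {
            Result[] results = table.get(gets);
            System.out.println("matched " + results.length + " records");
        } finally {
            table.close();
        }
        solr.shutdown();
    }
}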
3 Solution
In summary, for multi-condition combination queries over 100 million records, the proposed solution is HBase + Solr. CDH ships HBase and Solr as components, so both can be managed uniformly on the CDH platform. HBase stores the massive data; Solr is deployed in SolrCloud mode and provides index building and querying. Indexes can be built offline in batch through the API, or HBase Indexer (Lily HBase Indexer), a component that connects HBase and Solr and is also integrated into the CDH platform, can be used for automated index building.
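As a rough illustration of the automated path, Lily HBase Indexer is driven by an indexer definition that maps HBase rows to Solr documents through a morphline. The snippet below is a minimal sketch only; the table name and morphline file name are assumptions, and the authoritative configuration format is in the CDH documentation.

<!-- indexer-config.xml: a minimal Lily HBase Indexer definition (sketch).
     Table name and morphline file are assumptions for illustration. -->
<indexer table="test1"
         mapper="com.ngdata.hbaseindexer.morphline.MorphlineResultToSolrMapper">
  <!-- the morphline maps HBase cells (e.g. people:sex) to Solr fields -->
  <param name="morphlineFile" value="morphlines.conf"/>
</indexer>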