
Standard load test for 1C


A mandatory step in any implementation or change of an existing information system is assessing the required system performance and planning the computing resources needed for it. Currently, there is no exact solution to this problem in the general case, and even if, despite its complexity and cost, such an algorithm were offered by some manufacturer, even small changes in hardware, software version, system configuration, or in the number of users or their typical behavior would lead to significant errors.

However, there are a number of ways to estimate the hardware and software configuration required to achieve the desired performance. All of these methods can be applied during selection, but the consumer must understand their areas of application and limitations.

Most of the existing methods for assessing performance are based on one or another type of testing.

There are two main types of testing: component and integral.

Component testing evaluates individual components of the solution, ranging from the performance of processors or storage subsystems to the performance of the server as a whole, but without a payload in the form of a particular business application.

The integral approach is characterized by assessing the performance of the solution as a whole, both its software and hardware parts. In this case, either the business application that will be used in the final solution or model applications that emulate certain standard business processes and loads can be used.


How to interpret your test results

As a result, you get a certain performance (speed) index. Whether the result is good or bad does not matter: it is the result of the PLATFORM running on your hardware. In the client-server variant, it is the result of a complex chain of requests passing through various components. You get the overall actual result, which is determined by the bottleneck in the system. There is always a bottleneck.

In other words, the DBMS settings, the OS settings, and the hardware all affect the overall result.

Which server is better

This test, performed on a specific server, gives a result for a particular combination of hardware, operating system and DBMS settings, etc. At the same time, a high score on particular server hardware means that, under normal conditions, the same result will be achieved on identical server hardware. The test is also a free way to compare 1C:Enterprise installations under Windows and Linux and the three different DBMSs supported by the 1C:Enterprise 8 platform.

Test safety

The test is absolutely safe. It does not lead to a server "crash" (there is no "stress" algorithm) and does not require any preliminary measures, even on a production server. No confidential data is recorded in the test results. Information about the CPU, RAM and HDD parameters is collected; device serial numbers are not. All of this is easy to verify - the test code is 100% open. No information is transferred without your knowledge.

Classification TPC-A-local Throughput / TPC-1C-GILV-A

The test belongs to the category of universal integral cross-platform tests. It is applicable to both the file and the client-server mode of 1C:Enterprise and works with all DBMSs supported by 1C.

Versatility allows you to make a generalized assessment of performance without being tied to a specific typical platform configuration.

On the other hand, this means that for accurate sizing of a specific project the test only gives a preliminary assessment; it should be followed by specialized load testing.

Download test

This test is non-commercial and can be downloaded free of charge for both platform 8.2 and platform 8.3.

Technical details

What happens in the test within one cycle of the operation?

Specifics of using the PostgreSQL database test

Set the standard_conforming_strings parameter in the postgresql.conf configuration file to 'off'.
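For reference, this is the corresponding line in the configuration file (after changing it, the PostgreSQL configuration has to be reloaded for the parameter to take effect):

    # postgresql.conf
    standard_conforming_strings = off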

How to measure the load on iron

It should be noted that the test itself already performs part of the measurements. For a more detailed picture, I recommend using the Process Explorer utility by Mark Russinovich.
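Process Explorer gives a live picture on Windows; if you also want to log the load programmatically during a test run, a small script along these lines can be used (a sketch based on the third-party psutil package, not part of the test itself):

    # Periodically print CPU, RAM and cumulative disk I/O while the test runs.
    # Requires the third-party "psutil" package (pip install psutil).
    import time
    import psutil

    def log_load(interval_s=5, duration_s=600):
        end = time.time() + duration_s
        while time.time() < end:
            cpu = psutil.cpu_percent(interval=interval_s)  # % over the interval
            mem = psutil.virtual_memory()
            disk = psutil.disk_io_counters()
            print(f"cpu={cpu:.0f}%  ram_used={mem.used / 2**30:.1f} GiB  "
                  f"disk_read={disk.read_bytes / 2**20:.0f} MiB  "
                  f"disk_write={disk.write_bytes / 2**20:.0f} MiB")

    if __name__ == "__main__":
        log_load()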

The figure shows an example of metering for the file version.

Sizing for the roles of a 1C server and an MS SQL 2008 DBMS server for 50 users.

Following the recommendations of a server selection expert, we assemble the hardware:

Choosing a platform: IBM x3650 M3
Choosing a processor: Intel Xeon E5506 - 1 pc.
Choosing RAM: 4 modules of 4 GB
Choosing hard drives: 3 x 146 GB SAS in RAID 5

Used software:

MS Windows 2008 x64
MS SQL Server 2008 x64
1C server 8.2 x64

Test environment: for carrying out stress testing, the configuration 1C 8.2: "Standard stress test" was used.

Testing progress:

A 1C client session was launched on the local server in agent mode and in testing mode.
The initial number of emulated standard 1C users (creating and deleting documents and generating reports) in the test configuration was set to 20. After each test run, the number of users was increased in steps of 20.

Initially (without user connections), the DBMS occupies 569 MB of RAM (2 databases are created: the 1C 8.2 SCP configuration and the test configuration); the memory occupied by the system is 2.56 GB.
During testing (up to 110 users), up to 12 GB of memory is allocated to the DBMS; one 1C test session takes about 55 MB (55 MB x 200 = 11 GB). For comparison, one real user session (the 1C client application) takes about 300-500 MB; this figure is for a user working in the standard 1C:Trade or 1C:UPP configuration. The 1C server service (rphost) uses practically no RAM, since it only relays requests from the client side to the DBMS (by default TCP 1541 is used, and TCP 475 for the 1C protection server).
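These per-session figures can be turned into a rough capacity estimate (a sketch that simply reuses the numbers observed in this particular test; they are not universal constants and will differ on other systems and configurations):

    # Rough RAM estimate for a combined 1C + MS SQL server, using the figures
    # observed above (569 MB DBMS baseline, ~55 MB per emulated test session,
    # ~2.56 GB for the OS). Real client sessions are heavier (300-500 MB),
    # but that memory is consumed on the client/terminal side.
    def estimate_server_ram_gb(test_sessions,
                               mb_per_session=55,
                               dbms_base_mb=569,
                               os_gb=2.56):
        return os_gb + dbms_base_mb / 1024 + test_sessions * mb_per_session / 1024

    for sessions in (50, 110, 200):
        print(sessions, "sessions ->", round(estimate_server_ram_gb(sessions), 1), "GB")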

The use of CPU resources was divided between the 1C server service (rphost) and the DBMS service (sqlservr). With a load of 40 users, rphost took 37% of the CPU, sqlservr took 30%. With a load of 60 users, rphost took up 47% of the CPU power, sqlservr took up 29%.

While the generated documents were being deleted, the sqlservr service wrote to the disk subsystem at up to 6.5 MB/s (about 52 Mbit/s).

The load on the network between the 1C server and the DBMS (on the local loopback interface) was 10 Mb/s.
Test result issued by 1C test configuration:

Parameters: Start test 000000006 from 24.05.2012 12:44:16
Standard stress test, version 2.0.4.11
Testing started on May 23, 2012 12:36:39 PM. Runtime: 57.1 minutes.
Test conditions
"Server 1C: Enterprise: test
Infobase name: testcenter_82
Virtual users: TEST, "

Conclusions:

The server configuration should be scaled down, since the current one is 100% redundant for 50 users.
Testing should be repeated using a second server to launch the emulated users and to check the network load; the expected load is 10 Mb/s.
The 1C architecture consists of 4 blocks: 1C server, DBMS, 1C security server and 1C client. In this test, all these functions were run on the same server.

With a heavy load on the 1C server, there are the following recommendations:

Distribute the roles of the 1C server, the DBMS server, the 1C protection server and the 1C client applications (for better performance, 1C client applications should be run on a terminal server).
On the DBMS server, use the following storage layout: the OS on RAID 1, the DBMS data files (.mdf, .ndf) on a separate RAID 0 array, the log files (.ldf) on another separate RAID 0 array, and the temporary files and the paging file on a separate disk.

Background


TPC tests and other universal tests make it possible to select the most promising platforms and compare offers from different manufacturers, but they are only reference information that does not take into account the specifics of the business. Specialized tests allow a more accurate choice of a specific server model and configuration. However, the most informed decisions are made only on the basis of stress test results: only they make it possible to optimally configure the selected server platform and tune it for maximum performance.

What is TPC-1C-GILV

This is a series of independent tests designed to assess the performance of the 1C: Enterprise 8.1 platform on your computer (s).

Of course, an "independent" test means that it is not sponsored by 1C.

The currently available test is "TPC-A-local Throughput / TPC-1C-GILV-A" (last updated August 2008, version 1.0.3).

Test idea TPC-A-local Throughput / TPC-1C-GILV-A

You download the configuration dump file (~400 KB) from this site and restore it on your own system. If you deploy the configuration as a file infobase, the test will largely exercise the combination "your computer's CPU - the HDD on which the infobase is located".

If you deploy the configuration as a client-server infobase, the load will fall mainly on the CPU of the application server and on the CPU and disk subsystem of the DBMS server.

The test intensively writes 5000 documents. There is no deep meaning in the business logic of the code; what is measured is the posting of document X.
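The actual test lives inside the 1C configuration itself; purely as an illustration of the idea (a single thread writes documents and the score is the achieved write rate), a rough sketch in Python could look like this - the table structure and the "document" here are invented for the example and have nothing to do with the real test code:

    # Illustration of a single-threaded "write rate" benchmark: insert N simple
    # "documents", commit each one, and report documents written per second.
    # This is NOT the real Gilev test, only the general idea behind it.
    import sqlite3
    import time

    N_DOCUMENTS = 5000  # the Gilev test writes about 5000 documents per run

    def run_benchmark(db_path="benchmark.db"):
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS docs (id INTEGER, payload TEXT)")
        start = time.perf_counter()
        for i in range(N_DOCUMENTS):
            conn.execute("INSERT INTO docs VALUES (?, ?)", (i, "x" * 100))
            conn.commit()  # commit per document so the disk, not only RAM, is involved
        elapsed = time.perf_counter() - start
        conn.close()
        return N_DOCUMENTS / elapsed  # documents per second

    print(f"write rate: {run_benchmark():.1f} documents/s")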

The main beauty of the test is that you don't need to know any technical details. The test runs by itself and produces the score itself. Besides, you don't have to show the result to anyone :)

You can compare the performance of several servers, or one server with different characteristics of the disk subsystem.

By running the test on the application server and then on a client over the network, you can assess the impact of the network between the client and the server.

How to run the test

Running the test is very easy: you just need to press a button

and wait until the test indicator (to the right of the button) reaches 100%.

The test usually takes about 8 minutes.

What the test results mean

The test result is reported as the "write rate" of the test data. The test error is 2 units. For an accurate assessment, you can repeat the test 3 times.

After the test indicator reaches 100%, you will see some graphs like this:

Below the graphs are some previous similar tests.

The color of the graph is indicative of the current quality of "overall" performance for non-blocking work.

The green color of the graph in conjunction with some conventionally selected benchmarks on the right allows you to make a cross-platform generalized assessment of "good" performance :)

How to interpret your test results

As a result, you get a certain performance (speed) index. Whether the result is good or bad does not matter: it is the result of the PLATFORM running on your hardware. In the client-server variant, it is the result of a complex chain of requests passing through various components. You get the overall actual result, which is determined by the BOTTLENECK in the system. THERE IS ALWAYS A BOTTLENECK!

In other words, the DBMS settings, the OS settings, and the hardware all affect the overall result :)

Which server is better

This test, performed on a specific server, gives a result for a particular combination of hardware, operating system and DBMS settings, etc. At the same time, a high score on particular server hardware means that, under normal conditions, the same result will be achieved on identical server hardware. The test is also a free way to compare 1C:Enterprise installations under Windows and Linux and the three different DBMSs supported by the 1C:Enterprise 8.1 platform.

Test safety

The test is absolutely safe. It does not lead to a server "crash" (there is no "stress" algorithm) and does not require any preliminary measures, even on a production server. No confidential data is recorded in the test results. Information about the CPU, RAM and HDD parameters is collected; device serial numbers are not. All of this is easy to verify - the test code is 100% open. No information can be sent without your knowledge.

How to publish test results

If you would like to help develop the test, you can run a series of tests on your servers. Then, from the overall list of completed tests, leave only those that you want to publish and send a .dt upload with the results.

The data will be checked manually (to make sure it is not erroneous), the sender of the tests will be added to the "author" column, and the results will be added to the upload that is available for everyone to download.

Classification TPC-A-local Throughput / TPC-1C-GILV-A

The test belongs to the category of universal integral cross-platform tests. It is applicable to both the file and the client-server mode of 1C:Enterprise and works with all DBMSs supported by 1C.

Versatility allows you to make a generalized assessment of performance without being tied to a specific typical platform configuration.

On the other hand, this means that for accurate sizing of a specific project the test only gives a preliminary assessment; it should be followed by specialized load testing (for example, using 1C: Test Center).

Note. Test modification "A" means "automatic lock management". After the release of official versions of typical 1C solutions, it is planned to adapt the test to work in "managed locks" mode and designate it with the letter "M".

Download test

This test is not commercial and can be downloaded free of charge.

Test results

Top 3 best 1C client-server installations on MS SQL Server. You can get into this table too. You can see the results in more detail by downloading the test.

Technical details

What happens in the test within one cycle of the operation?

How to measure the load on iron

It should be noted that the test itself already performs part of the measurements. For a more detailed picture, I recommend using the Process Explorer utility by Mark Russinovich.

The figure shows an example of metering for the file version.

Contacts for TPC-1C-GILV

http://site/1c/tpc

test results, suggestions for development

IGOR CHUFAROV, Head of the Department of Integrated Automated Systems of JSC "Radiozavod", [email protected]

40 points in the Gilev test - myth or reality?

Heated discussions continue around the Gilev test, fueled in part by conflicting results. I will share my experience of using this tool.

The origins of the ambiguity

When they first encounter the Gilev test, many experts are surprised by the uncharacteristic results it produces. For example, desktop hardware can perform better than an expensive, powerful server, and the file version gets a higher rating than SQL. While the second phenomenon is more or less clear - it is explained in the test documentation and in numerous forum discussions - no definite conclusions have been drawn about the relatively low results on expensive server equipment.

Before reporting the results obtained, it is worth saying a few words about the Gilev test itself and what it is.

The name "Gilev Test" means the TPC-1C stress test, available for free download at.

Known results

The source provides interesting results comparing a server based on 2 x Intel Xeon E5620 2.4 GHz with 48 GB of RAM and a personal computer based on an Intel Core i5 3.0 GHz with 16 GB of RAM. Without additional settings and tweaks - "out of the box", as they say - the workstation "beat" the server in the Gilev test, showing 155% higher performance.

The server scored about 17 points, while the desktop got more than 40. As a result of experiments (most of which consisted of cutting back the desktop's resources to determine how this degraded its result) and server tuning, the authors of the article managed to bring the server up to 25.6 points.

The result, frankly speaking, is still far from the 40 points of an ordinary desktop PC. So, is it better to deploy a 1C server on budget hardware bought at the nearest kiosk? Of course not.

Discussion at Infostart Event 2016

A few days before my trip to the Infostart Event 2016 conference in St. Petersburg, an interesting two-hour video about the operation of the 1C: Enterprise system in virtualized environments, selection of equipment and performance issues appeared on the website courses-1s.rf.

The author of that webinar, Andrey Burmistrov, was expected at the Infostart Event 2016 conference - a 1C expert on the technological issues of large implementations, who has worked both at 1C and on many large implementations in our country and has mentored more than 2000 specialists in the "1C Optimization" course and in preparation for 1C:Expert.

On the wave of interest in the topic, I talked to Andrey both online and later at the conference itself. One of the questions I asked him during the HighLoad round table concerned the possibility of releasing a webinar with reference testing of various server hardware options - with an SSD, with a conventional hard drive, in various hardware configurations. The answer was something like this: "Thank you, the idea is interesting. Maybe we'll do it. Just give us an Intel P3700 or P3600 and we will gladly test it. It's not easy to get an SSD somewhere for a week of testing."

So it turned out that almost none of my interlocutors had seen more than 30 points in SQL mode with their own eyes, and those who had seen it noted that it was not on server hardware.

A vicious circle? A serious question arose: "40 points in the Gilev test on server hardware in SQL mode - myth or reality?"

Read the entire article in the journal "System Administrator", No. 5 for 2017 on pages 10-15.


Accounting and management accounting products from the 1C company are the most widespread in the Russian Federation. Thousands of companies run their business on typical and specialized 1C configurations. With such massive use, questions regularly arise about optimizing the software budget and using resources wisely. Disputes about the server part of this complex do not subside - in particular, about which operating system to base the 1C server on and which DBMS to entrust with processing the 1C databases. In our tests, we will try to answer these questions.

Test participants

MS Windows Server operating system and MS SQL DBMS

  • The 1C company openly positions this bundle as the main working model; accordingly, 1C products are created primarily for it
  • Availability of the SharedMemory protocol for direct high-speed data exchange
  • Official technical support and service contracts are available
  • There is a knowledge base and plenty of information on installing and fine-tuning 1C + MS SQL

Unix operating system and PostgreSQL DBMS

  • The system is completely free (except for the 1C:Enterprise server license)
  • Many parameters that improve DBMS performance can be flexibly configured (see the sketch after this list)
  • 1C products support the PostgreSQL DBMS
  • Database replication is possible
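As an illustration of that flexibility, here are a few postgresql.conf parameters that are commonly tuned for 1C-style workloads. The parameter names are standard PostgreSQL settings; the proportions in the comments are only widespread rules of thumb, not values taken from this test:

    # postgresql.conf - illustrative starting points, adjust to the actual server
    shared_buffers = 16GB                  # often set to ~25% of server RAM
    effective_cache_size = 48GB            # often ~50-75% of RAM (planner hint only)
    work_mem = 64MB                        # memory per sort/hash operation
    maintenance_work_mem = 1GB             # index builds, VACUUM
    checkpoint_completion_target = 0.9     # spread checkpoint I/O over time
    synchronous_commit = off               # faster commits; a crash may lose the last transactions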

Of course, the cost of the project, fault tolerance and technical support are important criteria when choosing an information system for 1C. However, there is a factor that in most cases radically influences the decision, and that is speed.

Since there is a great deal of technical literature on these two systems on the Internet, one could argue for a long time over long comparison tables that, depending on the goals, emphasize the advantages of one product or another. One could debate how unique this or that parameter is among hundreds of similar ones and how it affects the end result. But theory without practice is dead, so in this article we propose to skip the theory and go straight to the facts: to test the performance of both information systems in practice, with a certain level of recommended settings and in various server architecture options (see Table 2).

Testing methods

In our tests, we will rely on two methods of synthetic load generation and user activity simulation in 1C: the Gilev test (TPC-1C) and a dedicated test built with the Test Center tool from the 1C: KIP toolkit, using custom user scenarios.

Gilev test (TPC-1C)

The Gilev test belongs to the category of universal cross-platform load tests. It can be used for both the file and the client-server architecture of 1C:Enterprise. The test measures the amount of work done per unit of time in a single thread and is suitable for assessing the speed of single-threaded workloads: interface rendering, the effect of resource overheads, document reposting, month-end closing procedures, payroll calculation, etc. Its versatility allows a summary performance assessment without being tied to a single typical platform configuration. The result of the test is an overall score for the measured 1C system, expressed in arbitrary units.

Specialized test from the "Test Center" 1C: KIP toolkit

Test Center is a tool for multi-user load testing of systems based on 1C:Enterprise 8 (see Figure 1). With its help you can simulate the work of a company without the participation of real users, which makes it possible to evaluate the applicability, performance and scalability of an information system under realistic conditions. The tool is a configuration that provides a mechanism for controlling the testing process. To test an infobase, the Test Center configuration must be integrated into the configuration of the tested database by comparing and merging configurations. As a result of the merge, the objects and common modules required for Test Center operation are added to the metadata of the tested base.

Figure 1 - How the Test Center from the 1C: KIP toolkit works

Thus, using the 1C: KIP toolkit and the data available in real 1C production bases, the programmer builds a full-fledged automatic test scenario based on the list of documents and catalogs that are key for the given configuration (request for spending funds, supplier order, sale of goods and services, etc.). When the scenario is run, the Test Center automatically reproduces the multi-user activity described in it: it creates the required number of virtual users (according to the list of roles) and starts performing the actions.

Testing parameters

When setting up test scripts to reliably simulate the simultaneous work of a large number of users, certain test parameters are set for each type of document (see Table 1):

  • Document - indicates a specific document in the working base, on the basis of which load testing will be performed
  • Launch priority - forms the order of launching tests for each type of document
  • Number of documents - determines the volume of generated test documents
  • Pause, seconds - delay when starting a series of tests, within the same type of documents
  • The number of lines in the document is an informational indicator that informs about the "massiveness" of the test document, which affects the processing time and the load on resources

Tests are performed in 3 iterations and the results are written to a table. The results obtained in this way, measured in seconds, reflect the performance level of the 1C bases as realistically and objectively as possible, in conditions close to real ones (see Tables 3.1 and 3.2).
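For clarity, one row of such a scenario can be represented as a simple record; the field names below mirror the parameters listed above, and the sample values are taken from the "Role 1 / Buyer invoice" row of Table 1 (this is only an illustration, not code from the Test Center itself):

    # One step of a Test Center-style scenario: which document type is generated,
    # in what order, how many documents, with what pause, and how many lines each has.
    from dataclasses import dataclass

    @dataclass
    class ScenarioStep:
        document: str   # document type in the working base
        priority: int   # launch priority within the role
        documents: int  # number of documents to generate
        pause_s: int    # pause when starting a series of tests, seconds
        lines: int      # number of lines per document ("massiveness")

    step = ScenarioStep("Buyer invoice", priority=1, documents=25, pause_s=51, lines=62)
    print(step)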

Table 1. Parameters of test scenarios

Document             Launch priority   Documents   Pause, s   Lines per document

Role 1
Buyer invoice                1             25          51           62
Receipt of goods             2             25           -           80
Sale of goods                3             25           -          103
Money orders                 4             25           -            1
Buyer returns                5             25           -           82

Role 2
Buyer invoice                5             10          65           79
Receipt of goods             1             22           -           80
Sale of goods                2             25           -          103
Money orders                 3             25           -            1
Buyer returns                4             25           -           75

Role 3
Buyer invoice                4             15          45           76
Receipt of goods             5             26           -           80
Sale of goods                1             52           -          103
Money orders                 2             26           -            1
Buyer returns                3             32           -           90

Role 4
Buyer invoice                3             45          38           70
Receipt of goods             4             30           -           80
Sale of goods                5             30           -          103
Money orders                 1             20           -            1
Buyer returns                2             20           -           86

Role 5
Buyer invoice                2             30          73           76
Receipt of goods             3             30           -           80
Sale of goods                4             30           -          103
Money orders                 5             18           -            1
Buyer returns                1             18           -           91

Role 6
Buyer invoice                1             40          35           86
Receipt of goods             2             40           -           80
Sale of goods                3             40           -          103
Money orders                 4             40           -            1
Buyer returns                5             40           -           88

Role 7
Buyer invoice                5             25          68           80
Receipt of goods             1             25           -           80
Sale of goods                2             25           -          103
Money orders                 3             25           -            1
Buyer returns                4             25           -           90

Role 8
Buyer invoice                3             25          62           87
Receipt of goods             4             25           -           80
Sale of goods                5             25           -          103
Money orders                 1             25           -            1
Buyer returns                2             25           -           92

Role 9
Buyer invoice                2             20          82           82
Receipt of goods             4             20           -           80
Sale of goods                5             20           -          103
Money orders                 1             20           -            1
Buyer returns                3             20           -           98

Role 10
Buyer invoice                4             50           2           92
Receipt of goods             1             50           -           80
Sale of goods                2             50           -          103
Money orders                 5             50           -            1
Buyer returns                3             50           -           98

(A dash means no pause is specified for that document type.)

Table 2. Technical characteristics of the test bench

No.  System role                                              CPU / vCPU                     RAM     Disk subsystem
1    Terminal server - virtual machine for test management    4 cores, 2.9 GHz               16 GB   Intel SATA SSD, RAID 1
2    Scenario 1. Hardware server: 1C + DBMS                   Intel Xeon E5-2690, 16 cores   96 GB   Intel SATA SSD, RAID 1
3    Scenario 2. Virtual server: 1C + DBMS                    16 cores, 2.9 GHz              64 GB   Intel SATA SSD, RAID 1
4    Scenario 3. Virtual 1C server                            16 cores, 2.9 GHz              32 GB   Intel SATA SSD, RAID 1
5    Scenario 4. Virtual DBMS server                          16 cores, 2.9 GHz              32 GB   Intel SATA SSD, RAID 1

6    Software:
  • Microsoft Windows Server 2016 Datacenter
  • Microsoft Windows Server 2016 Standard
  • Microsoft SQL Server 2016 SP1 (13.0.4001.0)
  • Hyper-V hypervisor
  • 1C:Enterprise server 8.3.10.2667
  • CentOS 7.4.1708 (x64)
  • PostgreSQL 9.6.5 + patch PostgreSQL 9.6.5-4.1C

7    1C configurations:
  • Single-threaded synthetic test of the 1C:Enterprise platform + multi-threaded disk write test (2.1.0.7), by Gilev Vyacheslav Valerievich; size 0.072 GB
  • Enterprise Accounting CORP, revision 3.0 (3.0.52.39); application: thin client; interface option: Taxi; size 9.2 GB
  • Trade Management, revision 11 (11.3.4.21); platform: 1C:Enterprise 8.3 (8.3.10.2667); mode: server (compression: enhanced); application: thin client; localization: infobase and session - Russian (Russia); interface option: Taxi; size 11.8 GB

Table 3.1 Test results using the Gilev test (TPC-1C). The highest value is considered optimal.

Table 3.2 Test results using a special test 1C: KIP. The smallest value is considered optimal.

List of tests (average value over a series of 3 runs). Columns, left to right:
  Microsoft Windows Server / MS SQL:
    (1) hardware server, 1C + DBMS together, SharedMemory protocol;
    (2) virtual server, 1C + DBMS together, SharedMemory protocol;
    (3) hardware 1C server and hardware DBMS server, TCP-IP protocol;
    (4) virtual 1C server and virtual DBMS server, TCP-IP protocol;
  Unix-class OS / PostgreSQL: columns (5) and (6).

1C: KIP tests on an existing base, Enterprise Accounting configuration
  Turnover balance sheet:                    1.741 / 2.473 / 2.873 / 2.522 / 13.866 / 9.751 sec
  Posting returns of goods from buyers:      0.695 / 0.775 / 0.756 / 0.781 / 0.499 / 0.719 sec
  Posting payment orders:                    0.048 / 0.058 / 0.063 / 0.064 / 0.037 / 0.065 sec
  Posting receipts of goods and services:    0.454 / 0.548 / 0.535 / 0.556 / 0.362 / 0.568 sec
  Posting sales of goods and services:       0.667 / 0.759 / 0.747 / 0.879 / 0.544 / 0.802 sec
  Posting invoices for payment:              0.028 / 0.037 / 0.037 / 0.038 / 0.026 / 0.038 sec
  Calculation of cost estimates:             3.071 / 3.657 / 4.094 / 3.768 / 15.175 / 10.68 sec

1C: KIP tests on an existing base, Trade Management configuration
  Posting returns from clients:              2.192 / 2.113 / 2.070 / 2.418 / 1.417 / 1.494 sec
  Posting returns of goods to the supplier:  1.446 / 1.410 / 1.359 / 1.467 / 0.790 / 0.849 sec
  Posting sales orders:                      0.355 / 0.344 / 0.335 / 0.361 / 0.297 / 0.299 sec
  Recount of goods:                          0.140 / 0.134 / 0.131 / 0.144 / 0.100 / 0.097 sec
  Posting receipts of goods and services:    1.499 / 1.438 / 1.412 / 1.524 / 1.097 / 1.189 sec
  Posting sales of goods and services:       1.390 / 1.355 / 1.308 / 1.426 / 1.093 / 1.114 sec
  Posting RSC documents:                     0.759 / 0.729 / 0.713 / 0.759 / 0.748 / 0.735 sec
  1. In the special 1C test, "data reading and complex calculation" operations, such as "Turnover balance sheet" and "Calculation of cost estimates", are performed several times faster on Microsoft SQL Server.
  2. In "data writing and document posting" operations, the PostgreSQL DBMS optimized for 1C shows the best result in most tests.
  3. Gilev's synthetic test also shows an advantage for PostgreSQL. This is because the synthetic test is based on measuring the speed of creating and posting certain types of documents, which also falls into the category of "data writing and document posting" operations.

Having finished with the cross-platform comparison, let's move on to comparisons within each system:

  1. As expected, 1C tests on the hardware platform show better results than on the virtual one. The difference in the results of the special 1C test is small in both cases, which points to the gradual optimization of hypervisors by their vendors.
  2. It is also expected that using shared memory (SharedMemory) speeds up data exchange between the 1C server and the DBMS; accordingly, those results are slightly better than with the scheme where the two services communicate over the TCP-IP protocol.

We can conclude that with the correct configuration of 1C and the DBMS, significant results can be achieved even with free software. Therefore, when designing a new IT infrastructure for 1C, you need to take into account the level of load on the system, the type of operations that prevail in the database, the available budget, the availability of a specialist in the non-standard DBMS, the need for integration with external services, and so on. Based on this, the required solution can be selected.

Read on for further testing.

