1C 8.3 works very slowly: what to do. Automation Tips
I often get asked questions like:
- Why is the 1C server so slow?
- The computer running 1C works very slowly.
- The 1C client is terribly slow.
What to do and how to beat it, in order:
Clients work very slowly with the server version of 1C
Besides slow 1C performance, there is often slow work with network files as well. The problem occurs both during normal operation and over RDP.
To solve this, after every installation of Windows 7 or Windows Server 2008 I always run:
netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global rss=disabled chimney=disabled
and the network works without problems.
Sometimes the following works better:
netsh int tcp set global autotuninglevel=highlyrestricted
Here is what the resulting setup looks like.
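To see whether these parameters are already set, the current TCP global configuration can be checked first. A minimal check, assuming a standard Windows command prompt run as administrator:

```shell
:: Show the current global TCP parameters (auto-tuning level, RSS,
:: Chimney Offload) before and after applying the commands above.
netsh int tcp show global
```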
Configure Antivirus or Windows Firewall
How to configure the antivirus or Windows Firewall for a 1C server (for example, a bundle of 1C:Enterprise Server and MS SQL Server 2008).
Add rules:
- If the SQL server accepts connections on the standard TCP port 1433, allow it.
- If the SQL port is dynamic, allow connections for the application %ProgramFiles%\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe.
- The 1C server works on port 1541, the cluster on 1540, plus the range 1560-1591. For completely mysterious reasons, sometimes this list of open ports still does not allow connections to the server. To make it work for sure, allow the whole range 1540-1591.
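The same rules can be scripted instead of clicking through the firewall GUI. A sketch using the standard netsh advfirewall syntax (rule names are arbitrary; the SQL Server path must match your instance):

```shell
:: Open the whole 1C cluster range, the safest option per the list above
netsh advfirewall firewall add rule name="1C Server 1540-1591" dir=in action=allow protocol=TCP localport=1540-1591

:: SQL Server on the standard static port
netsh advfirewall firewall add rule name="MS SQL 1433" dir=in action=allow protocol=TCP localport=1433

:: Or, for a dynamic SQL port, allow the server binary itself
netsh advfirewall firewall add rule name="MS SQL dynamic port" dir=in action=allow program="%ProgramFiles%\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe"
```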
Server / Computer Performance Tuning
For the computer to deliver maximum performance, you need to configure it accordingly:
1. BIOS settings
- In the server BIOS, disable all processor power-saving settings.
- If there is a "C1E" option, be sure to DISABLE it!
- For some poorly parallelized tasks, it is also recommended to turn off Hyper-Threading in the BIOS.
- In some cases (especially on HP servers!) you need to go into the server BIOS and turn OFF every item whose name contains EIST, Intel SpeedStep or C1E.
- Instead, find the processor items whose name contains Turbo Boost, and ENABLE them.
- If the BIOS has a general power-saving mode setting, set it to maximum performance mode (it may also be called "aggressive").
2. Power plan in the operating system: High performance
Servers with Intel Sandy Bridge architecture can dynamically change processor frequencies.
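The High performance plan can be activated from the command line as well; the GUID below is the identifier Windows itself assigns to the built-in High performance plan:

```shell
:: Activate the built-in High performance power plan
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

:: Confirm which plan is now active
powercfg /getactivescheme
```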
In recent times, users and administrators increasingly complain that new 1C configurations developed on the basis of the managed application are slow, in some cases unacceptably slow. Clearly, the new configurations contain new functions and capabilities and are therefore more demanding on resources, but most users have no understanding of what primarily affects the operation of 1C in file mode. Let's try to fix this gap.
In our previous publications, we have already touched on the impact of disk subsystem performance on 1C speed, but that study concerned local use of the application on a separate PC or a terminal server. At the same time, most small deployments involve working with a file base over a network, where one of the users' PCs serves as the server, or a dedicated file server built on a regular, most often also inexpensive, computer.
A small survey of Russian-language resources on 1C showed that this question is diligently avoided; when problems arise, the usual advice is to switch to client-server or terminal mode. It has also become almost conventional wisdom that configurations on the managed application work much slower than ordinary ones. As a rule, the arguments are "iron": "Accounting 2.0 just flew, and the 'troika' barely moves". Of course, there is some truth in these words, so let's try to figure it out.
Resource consumption at a glance
Before starting this study, we set ourselves two goals: to find out if managed application-based configurations are actually slower than conventional configurations, and which resources have the highest impact on performance.
For testing, we took two virtual machines running Windows Server 2012 R2 and Windows 8.1, respectively, allocating each 2 cores of the host Core i5-4670 and 2 GB of RAM, which is roughly equivalent to an average office machine. The server was placed on a RAID 0 array of two WD Se drives, and the client on a similar array of general-purpose disks.
As experimental bases, we chose several configurations of Accounting 2.0, release 2.0.64.12, which was then updated to 3.0.38.52; all configurations were run on platform 8.3.5.1443.
The first thing that attracts attention is the increased size of the "troika" infobase: it has grown significantly, along with a much greater appetite for RAM:
We are already ready to hear the usual: "what did they add to this trio", but let's not rush. Unlike users of client-server versions, which require a more or less qualified administrator, users of file versions rarely think about database maintenance. Also, employees of specialized firms serving (read - updating) these bases rarely think about it.
Meanwhile, the 1C information base is a full-fledged DBMS of its own format, which also requires maintenance, and for this there is even a tool called Testing and fixing the infobase. Perhaps the name played a cruel joke, which, as it were, implies that this is a tool for troubleshooting, but poor performance is also a problem, and restructuring and reindexing, together with table compression, are well-known database optimization tools for any DBMS administrator. Let's check?
After applying the selected actions, the database dramatically "lost weight", becoming even smaller than the "two", which no one has ever optimized either, and the RAM consumption also slightly decreased.
Subsequently, after loading new classifiers and directories, creating indexes, etc., the size of the base will grow; in general, the bases of the "three" are larger than the bases of the "two". However, that is not the main point: if the second edition was content with 150-200 MB of RAM, the new edition already needs half a gigabyte, and this value should be taken into account when planning the resources needed to work with the program.
Network
Network bandwidth is one of the most important parameters for network applications, especially for 1C in file mode, which moves significant amounts of data over the network. Most small-business networks are built on inexpensive 100 Mbit/s equipment, so we began testing by comparing 1C performance in 100 Mbit/s and 1 Gbit/s networks.
What happens when you start the 1C file base over the network? The client downloads a fairly large amount of information into temporary folders, especially if this is the first "cold" launch. At 100 Mbps, we expectedly run into the bandwidth and downloading can take a significant amount of time, in our case, about 40 seconds (the price of the graph division is 4 seconds).
The second launch is faster, since some of the data is stored in the cache and remains there until the reboot. The transition to a gigabit network can significantly speed up the loading of the program, both "cold" and "hot", and the ratio of values is observed. Therefore, we decided to express the result in relative terms, taking the largest value of each measurement as 100%:
As you can see from the graphs, Accounting 2.0 loads twice as fast at any network speed, the transition from 100 Mbps to 1 Gbps allows you to speed up the download time by four times. There is no difference between the optimized and non-optimized Troika databases in this mode.
We also checked the impact of network speed on heavy operation, for example, during group reposting. The result is again expressed in relative terms:
Here it gets more interesting: the optimized base of the "troika" in a 100 Mbit/s network works at the same speed as the "two", while the unoptimized one shows a result twice as bad. On gigabit, the ratios are preserved: the unoptimized "three" is also twice as slow as the "two", and the optimized one lags behind by a third. Also, the transition to 1 Gbit/s reduces execution time threefold for version 2.0 and twofold for version 3.0.
In order to evaluate the impact of network speed on daily work, we used performance measurement by performing a sequence of predefined actions in each database.
For everyday tasks, network bandwidth is actually not a bottleneck: an unoptimized "three" is only 20% slower than the "two", and after optimization it turns out to be about as much faster, thanks to the advantages of thin client mode. The transition to 1 Gbit/s gives the optimized base no advantage, while the unoptimized base and the "two" start to work faster, showing a small difference between them.
From the tests carried out, it becomes clear that the network is not a bottleneck for new configurations, and the managed application works even faster than usual. You can also recommend switching to 1 Gb/s if heavy tasks and database loading speed are critical for you, in other cases, new configurations allow you to work effectively even in slow 100 Mb/s networks.
So why does 1C slow down? We will investigate further.
Server disk subsystem and SSD
In the previous article, we achieved an increase in 1C performance by placing databases on an SSD. Perhaps server disk subsystem performance is the limit? We measured server disk performance during group reposting in two databases at once and got a rather optimistic result.
Despite the relatively high number of input/output operations per second (IOPS) - 913 - the queue length did not exceed 1.84, which is a very good result. From this we can assume that a mirror of ordinary disks is enough for the normal operation of 8-10 network clients in heavy modes.
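The queue length and IOPS figures mentioned here come from the standard PhysicalDisk performance counters; they can be sampled from the command line with typeperf (counter names assume an English Windows installation):

```shell
:: Sample average disk queue length and transfers/sec once a second, 10 times
typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\PhysicalDisk(_Total)\Disk Transfers/sec" -si 1 -sc 10
```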
So is an SSD needed on the server? Testing answers this question best; we conducted it using the same methodology, with a 1 Gbit/s network connection everywhere, and the result is again expressed in relative values.
Let's start with the database loading speed.
It may seem surprising to someone, but the SSD base on the server does not affect the download speed of the database. The main limiting factor here, as shown by the previous test, is network throughput and client performance.
Let's move on to group reposting:
We already noted above that disk performance is quite sufficient even for heavy operation, so SSD speed has no effect here either, except for the unoptimized base, which on the SSD caught up with the optimized one. In fact, this once again confirms that optimization operations organize the information in the database, reducing the number of random I/O operations and increasing the speed of access to it.
On everyday tasks, the picture is similar:
Only the non-optimized base benefits from the SSD. Of course, you can buy an SSD, but it would be far better to think about timely maintenance of the bases. Also, don't forget to defragment the partition holding the infobases on the server.
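Partition defragmentation can be done with the built-in defrag utility; the drive letter below is an example, substitute the volume that holds your infobases:

```shell
:: Analyze fragmentation first, then defragment with progress output
defrag D: /A
defrag D: /U /V
```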
Client disk subsystem and SSD
We analyzed the influence of an SSD on the speed of locally installed 1C in a previous article, and much of what was said there is also true for network mode. 1C uses disk resources quite actively, including for background and scheduled tasks. In the figure below, you can see Accounting 3.0 accessing the disk quite actively for about 40 seconds after loading.
At the same time, you should be aware that for a workstation where one or two infobases are actively used, the performance of an ordinary mass-market HDD is quite sufficient. Buying an SSD can speed up some processes, but you will not notice a radical acceleration in everyday work, since, for example, loading will be limited by network throughput.
A slow HDD can slow down some operations, but by itself it cannot cause the program to slow down.
RAM
Although RAM is now obscenely cheap, many workstations keep working with the amount of memory installed when they were purchased. This is where the first problems lie in wait. Given that the average "troika" needs about 500 MB of memory, we can assume that a total of 1 GB of RAM will not be enough to work with the program.
We reduced the system memory to 1 GB and launched two infobases.
At first glance, everything is not so bad: the program has moderated its appetite and fit entirely into the available memory. But let's not forget that the need for working data has not changed, so where did it go? It was flushed to disk (cache, swap file, etc.): data not needed at the moment is evicted from fast RAM, of which there is not enough, to the slow disk.
Where does this lead? Let's see how system resources are used in heavy operations, for example, by starting group reposting in two databases at once. First on a system with 2 GB of RAM:
As you can see, the system actively uses the network to receive data and the processor to process them, disk activity is insignificant, in the process of processing it occasionally grows, but is not a limiting factor.
Now let's reduce the memory to 1 GB:
The situation is changing radically, the main load now falls on the hard disk, the processor and the network are idle, waiting for the system to read the necessary data from disk into memory and send unnecessary data there.
At the same time, even subjectively, working with two open databases on a system with 1 GB of memory turned out to be extremely uncomfortable: catalogs and journals opened with significant delay and heavy disk access. For example, opening the Sales of goods and services journal took about 20 seconds and was accompanied by high disk activity the whole time (highlighted by the red line).
In order to objectively assess the impact of RAM on the performance of configurations based on a managed application, we conducted three measurements: the loading speed of the first base, the loading speed of the second base, and group reposting in one of the bases. Both bases are completely identical and created by copying the optimized base. The result is expressed in relative units.
The result speaks for itself: while loading time grows by about a third, which is still tolerable, the time to perform operations in the database triples; there is no talking about comfortable work in such conditions. Incidentally, this is a case where buying an SSD can improve the situation, but it is much easier (and cheaper) to address the cause rather than the consequences and simply buy the right amount of RAM.
A lack of RAM is the main reason working with new 1C configurations becomes uncomfortable. Configurations with 2 GB of memory on board should be considered the minimum suitable. Keep in mind that in our case "greenhouse" conditions were created: a clean system with only 1C and the task manager running. In real life, a browser, office suite, antivirus, etc. are usually open on a work computer, so plan on 500 MB per database plus some margin, so that heavy operations do not run into a memory shortage and a drastic drop in performance.
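The sizing rule from this paragraph can be sketched as simple arithmetic. The 500 MB per base figure is from the measurements above; the 1536 MB headroom for the OS, browser, and antivirus is our own assumed value:

```shell
# Rough RAM estimate for a workstation running "troika" configurations.
bases=2          # simultaneously open infobases
per_base_mb=500  # ~500 MB per base, per the measurements above
headroom_mb=1536 # assumed margin for Windows, browser, antivirus
total_mb=$((bases * per_base_mb + headroom_mb))
echo "Recommended RAM: ${total_mb} MB"
```

For two open bases this prints a recommendation of 2536 MB, which lands comfortably above the 2 GB minimum named in the text.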
CPU
The central processing unit can without exaggeration be called the heart of the computer, since it ultimately performs all the calculations. To evaluate its role, we ran another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the test was run twice, with 1 GB and with 2 GB of memory.
The result turned out to be quite interesting and unexpected: the more powerful processor quite effectively took over the load when resources were short, but otherwise gave no tangible benefit. 1C:Enterprise in file mode can hardly be called an application that actively uses processor resources; it is rather undemanding. In difficult conditions, the processor is burdened not so much by computing the application's own data as by servicing overhead: additional I/O operations and so on.
Conclusions
So, why does 1C slow down? First of all, it is a lack of RAM: the main load in this case falls on the hard drive and the processor. And if they do not shine with performance, as is usually the case in office configurations, we get the situation described at the beginning of the article: the "two" worked fine, and the "three" slows down shamelessly.
Second place goes to network performance: a slow 100 Mbit/s channel can become a real bottleneck, although the thin client mode can maintain a fairly comfortable level of operation even over slow channels.
Then you should pay attention to the disk subsystem: buying an SSD is unlikely to be a good investment here, but replacing the disk with a more modern one will not hurt.
And finally, the processor. A faster model will, of course, not be superfluous, but there is little point in increasing its performance unless the PC is used for heavy operations: batch processing, heavy reports, month-end closing, and so on.
We hope this material will help you quickly understand the question of "why 1C slows down" and solve it most effectively and at no extra cost.
The 1C system occupies a dominant position in the automation market for small and medium-sized businesses. If a company has chosen a 1C accounting system, then usually almost all employees work in it, from ordinary specialists to management. Accordingly, the speed of the company's business processes depends on the speed of 1C. If 1C works at an unsatisfactory speed, then this directly affects the work of the entire company and profit.
There are essentially three methods of speeding up 1C:
- Increasing hardware capacity.
- Optimization of operating system and DBMS settings.
- Optimization of code and algorithms in 1C.
The first method requires purchasing equipment and licenses, the third requires a lot of programmer labor; as a result, both of these paths entail significant financial costs. First of all, you should pay attention to the program code, since no increase in server capacity can compensate for bad code. Any programmer knows that just a few lines of code can create a process that fully loads the resources of any server.
If the company is confident in the optimality of the program code, and it is still running slowly, usually the management decides to increase the server capacity. At this point, a logical question arises: what is missing, how much and what needs to be added as a result.
The 1C company gives a rather vague answer to the question of how many resources are needed; we wrote about it earlier in our posts. So you have to run experiments yourself and figure out what 1C performance depends on. EFSOL's performance experiments are described below.
When working with 1C 8.2, especially with configurations that use managed forms, a strange fact was noticed: 1C runs faster on a workstation than on a powerful server. Moreover, all the characteristics of the workstation are worse than those of the server.
Table 1 - Configurations on which initial testing was carried out
The workstation shows performance 155% higher than the 1C server despite the server's superior specifications. We began to figure out what was going on and narrow the search.
Figure 1 - Performance measurements on the workstation by the Gilev test
The first suspicion was that Gilev's test was inadequate. Measurements of opening forms, posting documents, generating reports, etc. using instrumentation tools showed that the Gilev test gives an estimate proportional to the actual speed of work in 1C.
Amount and frequency of RAM
An analysis of information available on the Internet showed that many write about the dependence of 1C performance on memory frequency - on the frequency, not the amount. We decided to test this hypothesis, since the server has RAM at 1066 MHz versus 1333 MHz on the workstation, while the amount of RAM on the server is already much higher. On the workstation we dropped straight to 800 MHz rather than 1066 MHz to make the effect of memory frequency on performance more visible. The result: performance fell by 12%, to 39.37 units. On the server we installed memory at 1333 MHz instead of 1066 MHz and got a slight increase of about 11%; performance was 19.53 units. So memory is not the issue, although its frequency does give a small boost.
Figure 2 - Performance measurements on the workstation after lowering the frequency of the RAM
Figure 3 - Performance measurements on the server after increasing the frequency of RAM
Disk subsystem
The next hypothesis was related to the disk subsystem. Two hypotheses immediately arose:
- SSDs are better than SAS drives, even when the latter are in RAID 10.
- iSCSI is slow or not working correctly.
Therefore, a regular SATA disk was installed in the workstation instead of the SSD, and the same was done with the server: the base was placed on a local SATA disk. As a result, the performance measurements did not change at all. Most likely this is because there is enough RAM and the disks are barely used during the test.
CPU
The processors on the server are, of course, more powerful, and there are two of them, but the frequency is slightly lower than on the workstation. We decided to check the effect of processor frequency on performance: there were no higher-frequency processors at hand for the server, so we lowered the processor frequency on the workstation. We dropped it straight to 1.6 GHz so that the correlation would show more clearly. The test showed that performance dropped significantly, but even with a 1.6 GHz processor the workstation produced almost 28 units, almost 1.5 times more than the server.
Figure 4 - Performance measurements on a workstation with a 1.6 GHz processor
Video card
There is information on the Internet that the video card can affect 1C performance. We tried the workstation's integrated video, a professional NVIDIA Quadro 4000 2 GB GDDR5 adapter, and an old GeForce card with 16 MB SDR. No significant difference was noticed in the Gilev test. Perhaps the video card does matter, but under real conditions, when managed forms need to be drawn, etc.
At the moment, there are two suspicions why the workstation runs faster even with noticeably worse performance:
- CPU. The type of processor on the workstation is better suited to 1C.
- Chipset. Other things being equal, our workstation has a newer chipset, which may be the reason.
We plan to purchase the necessary components and continue tests in order to finally find out what the performance of 1C depends on to a greater extent. While the process of approval and procurement is underway, we decided to perform optimization, especially since it costs nothing. The following steps have been identified:
Stage 1. System setup
First, let's make the following settings in the BIOS and operating system:
- In the server BIOS, disable all settings to save processor power.
- Select the "High performance" power plan in the operating system.
- The processor is also tuned for maximum performance. This can be done with the PowerSchemeEd utility.
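On newer systems the same processor tuning can be done with powercfg instead of PowerSchemeEd; SCHEME_CURRENT, SUB_PROCESSOR and PROCTHROTTLEMIN are built-in powercfg aliases:

```shell
:: Pin the minimum processor state to 100% on the active plan (AC power)
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
:: Re-apply the current scheme so the change takes effect
powercfg /setactive SCHEME_CURRENT
```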
Stage 2. Setting up the SQL server and the 1C:Enterprise server
We make the following changes to the settings of the DBMS server and 1C:Enterprise.
- Configure the Shared Memory protocol:
- Shared Memory is enabled only on platform 1C 8.2.17 and later; earlier releases use Named Pipes, which is somewhat slower. This technology only works when the 1C and MS SQL services are installed on the same physical or virtual server.
- It is recommended to put the 1C service into debug mode; paradoxically, this gives a performance boost. By default, debugging is disabled on the server.
- SQL Server setup:
- We only need the database engine itself; the other services bundled with it (perhaps someone uses them) only slow down the work. We stop and disable services such as FullText Search (1C has its own full-text search mechanism), Integration Services, etc.
- Set the maximum amount of memory allocated to the server, so that SQL Server counts on this amount and releases memory in advance.
- Set the maximum number of threads (Maximum worker threads) and raise the server priority (Boost priority).
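A sketch of these settings as T-SQL run through sqlcmd; the 12 GB memory cap and the thread count are example values to adjust for your server, leaving enough memory for the OS and the 1C server itself:

```shell
:: Enable advanced options so the settings below become visible
sqlcmd -S . -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
:: Cap SQL Server memory (example: 12 GB) so it cleans up in advance
sqlcmd -S . -E -Q "EXEC sp_configure 'max server memory (MB)', 12288; RECONFIGURE;"
:: Raise the worker thread limit and scheduling priority, as suggested above
sqlcmd -S . -E -Q "EXEC sp_configure 'max worker threads', 2048; RECONFIGURE;"
sqlcmd -S . -E -Q "EXEC sp_configure 'priority boost', 1; RECONFIGURE;"
:: Verify the transport of a local connection (Shared Memory expected)
sqlcmd -S . -E -Q "SELECT net_transport FROM sys.dm_exec_connections WHERE session_id = @@SPID;"
```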
Stage 3. Setting up a working database
After the DBMS server and 1C:Enterprise server are optimized, we proceed to the database settings. If the base has not yet been restored from the .dt file and you know its approximate size, it is better to set the initial size of the primary file to at least the base size, though this is a matter of taste: it will grow during restore anyway. Autogrowth, however, must be set: approximately 200 MB for the database and 50 MB for the log, because the default values (growth by 1 MB and by 10%) slow the server down badly when it has to grow the file on every third transaction. It is also better to store the database file and the log file on different physical disks or RAID groups if a RAID array is used, and to limit log growth. It is recommended to move tempdb to a fast array, since the DBMS accesses it quite often.
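These file settings can be expressed in T-SQL; the database name MyBase and the logical file names are placeholders for your own base:

```shell
:: Fixed autogrowth steps instead of the default 1 MB / 10%
sqlcmd -S . -E -Q "ALTER DATABASE [MyBase] MODIFY FILE (NAME = 'MyBase', FILEGROWTH = 200MB);"
:: Log: 50 MB growth step plus a cap to limit log growth
sqlcmd -S . -E -Q "ALTER DATABASE [MyBase] MODIFY FILE (NAME = 'MyBase_log', FILEGROWTH = 50MB, MAXSIZE = 20GB);"
```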
Stage 4. Setting up scheduled tasks
Scheduled tasks are created quite simply using the Maintenance Plan in the Management section, using graphical tools, so we will not describe in detail how this is done. Let's dwell on what operations need to be performed to improve performance.
- Indexes should be defragmented and statistics updated daily: if index fragmentation exceeds 25%, server performance drops sharply.
- Defragmenting indexes and updating statistics is fast and does not require disconnecting users; it is recommended daily.
- Full reindexing locks the database, so do it at least once a week. Naturally, a full reindex leaves the indexes defragmented, after which statistics are updated immediately.
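The same operations a Maintenance Plan performs can be sketched in T-SQL; MyBase and the table name are placeholders, and in practice you would loop over all tables:

```shell
:: Daily: online defragmentation of indexes plus statistics refresh
sqlcmd -S . -E -d MyBase -Q "ALTER INDEX ALL ON dbo._Document123 REORGANIZE;"
sqlcmd -S . -E -d MyBase -Q "EXEC sp_updatestats;"
:: Weekly (takes locks): full rebuild, then update statistics again
sqlcmd -S . -E -d MyBase -Q "ALTER INDEX ALL ON dbo._Document123 REBUILD;"
sqlcmd -S . -E -d MyBase -Q "EXEC sp_updatestats;"
```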
As a result, by fine-tuning the system, SQL Server, and the working database, we managed to increase performance by 46%. The measurements were made with 1C instrumentation and with the Gilev test; the latter showed 25.6 units against the original 17.53.
Brief conclusion
- 1C performance does not depend much on RAM frequency. Once a sufficient amount is reached, further memory expansion is pointless, as it does not increase performance.
- The performance of 1C does not depend on the video card.
- 1C performance does not depend on the disk subsystem, provided the disk read/write queue is not saturated. If SATA drives are installed and their queue is not exceeded, installing an SSD will not improve performance.
- The performance is quite dependent on the frequency of the processor.
- With proper configuration of the operating system and MSSQL server, it is possible to achieve an increase in 1C performance by 40-50% without any material costs.
ATTENTION! A very important point! All measurements were performed on a test base using the Gilev test and 1C instrumentation tools. The behavior of a real database with real users may differ from the results obtained. For example, in the test database we found no dependence of performance on the video card or the amount of RAM; these conclusions are rather doubtful, and under real conditions these factors can have a significant impact. When working with configurations that use managed forms, the video card matters: a powerful graphics processor speeds up drawing of the program interface, which is visible as faster 1C operation.
Is your 1C running slowly? Order IT maintenance of computers and servers by EFSOL specialists with many years of experience or transfer your 1C to a powerful and fault-tolerant 1C virtual server.
We often get questions about what slows down 1s, especially when switching to version 1s 8.3, thanks to our colleagues from Interface LLC, we tell in detail:
In our previous publications, we have already touched on the impact of the performance of the disk subsystem on the speed of 1C, however, this study concerned the local use of the application on a separate PC or terminal server. At the same time, most small implementations involve working with a file base over a network, where one of the user's PCs is used as a server, or a dedicated file server based on a regular, most often also inexpensive, computer.
A small study of Russian-language resources on 1C showed that this issue is diligently bypassed; in case of problems, it is usually advised to switch to client-server or terminal mode. And it has also become almost generally accepted that configurations on a managed application work much slower than usual ones. As a rule, arguments are given "iron": "here Accounting 2.0 just flew, and the" troika "is barely moving", of course, there is truth in these words, so let's try to figure it out.
Resource consumption at a glance
Before starting this study, we set ourselves two goals: to find out if managed application-based configurations are actually slower than conventional configurations, and which resources have the highest impact on performance.
For testing, we took two virtual machines running Windows Server 2012 R2 and Windows 8.1, respectively, with 2 cores of the host Core i5-4670 and 2 GB of RAM, which corresponds to an average office machine. The server was placed on a RAID 0 array of two WD Se, and the client was placed on a similar array of general purpose disks.
As experimental bases, we have chosen several configurations of Accounting 2.0, release 2.0.64.12 , which was then updated to 3.0.38.52 , all configurations were run on the platform 8.3.5.1443 .
The first thing that attracts attention is the increased size of the information base of the Troika, and it has grown significantly, as well as much greater appetites for RAM:
We are already ready to hear the usual: "what did they add to this trio", but let's not rush. Unlike users of client-server versions, which require a more or less qualified administrator, users of file versions rarely think about database maintenance. Also, employees of specialized firms serving (read - updating) these bases rarely think about it.
Meanwhile, the 1C information base is a full-fledged DBMS of its own format, which also requires maintenance, and for this there is even a tool called Testing and fixing the infobase. Perhaps the name played a cruel joke, which seems to imply that this is a tool for troubleshooting, but poor performance is also a problem, and restructuring and reindexing, along with table compression, are well-known database optimization tools to any RDBMS administrator. Let's check?
After applying the selected actions, the database dramatically "lost weight", becoming even smaller than the "two", which no one has ever optimized either, and the RAM consumption also slightly decreased.
Subsequently, after loading new classifiers and directories, creating indices, etc. the size of the base will grow, in general, the bases of the "three" are larger than the bases of the "two". However, this is not more important, if the second version was content with 150-200 MB of RAM, then the new edition needs half a gigabyte already, and this value should be taken into account when planning the necessary resources to work with the program.
Network
Network bandwidth is one of the most important parameters for network applications, especially for 1C in file mode, which moves significant amounts of data over the network. Most small-business networks are built on inexpensive 100 Mbps equipment, so we started testing by comparing 1C performance in 100 Mbps and 1 Gbps networks.
What happens when you launch a 1C file base over the network? The client downloads a fairly large amount of data into temporary folders, especially on the first, "cold" launch. At 100 Mbps we predictably run into the bandwidth limit, and loading can take a significant amount of time, about 40 seconds in our case (each graph division is 4 seconds).
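As a rough sanity check of such loading times, here is a back-of-the-envelope transfer-time estimate. The 500 MB payload and the 80% link efficiency are illustrative assumptions, not values measured in the article:

```python
# Back-of-the-envelope estimate of initial 1C load time over the network.
# The 500 MB payload and 80% efficiency are illustrative assumptions.

def transfer_seconds(size_mb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Time to move size_mb megabytes over a link_mbps link.

    efficiency accounts for protocol overhead (TCP/SMB), assumed ~80%.
    """
    size_megabits = size_mb * 8
    return size_megabits / (link_mbps * efficiency)

print(round(transfer_seconds(500, 100)))   # ~50 s on 100 Mbps
print(round(transfer_seconds(500, 1000)))  # ~5 s on 1 Gbps
```

The 100 Mbps figure lands in the same ballpark as the roughly 40-second "cold" launch observed above, which is why the link, not the disks, is the first suspect.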
The second launch is faster, since part of the data is kept in the cache and remains there until a reboot. Switching to a gigabit network significantly speeds up program loading, both "cold" and "hot", and the ratio between the values is preserved. We therefore decided to express the result in relative terms, taking the largest value of each measurement as 100%:
As the graphs show, Accounting 2.0 loads twice as fast at any network speed, and moving from 100 Mbps to 1 Gbps cuts loading time by a factor of four. In this mode there is no difference between the optimized and non-optimized "three" databases.
We also checked the impact of network speed on heavy operations, for example group reposting. The result is again expressed in relative terms:
Here it gets more interesting: the optimized "three" base in a 100 Mbps network works at the same speed as the "two", while the non-optimized one shows a result twice as bad. On gigabit, the ratios hold: the non-optimized "three" is also twice as slow as the "two", and the optimized one lags behind by a third. Moving to 1 Gbps reduces execution time by a factor of three for version 2.0 and by a factor of two for 3.0.
In order to evaluate the impact of network speed on daily work, we used performance measurement by performing a sequence of predefined actions in each database.
For everyday tasks, network bandwidth turns out not to be a bottleneck: a non-optimized "three" is only 20% slower than the "two", and after optimization it is about the same amount faster, which shows the benefit of thin client mode. Moving to 1 Gbps gives the optimized base no advantage, while the non-optimized base and the "two" start to work faster, showing only a small difference between them.
The tests make it clear that the network is not a bottleneck for the new configurations, and the managed application even works faster than the ordinary one. You can recommend switching to 1 Gbps if heavy tasks and database loading speed are critical for you; in other cases the new configurations work effectively even in slow 100 Mbps networks.
So why does 1C slow down? We will investigate further.
Server disk subsystem and SSD
In the previous article, we achieved an increase in 1C performance by placing databases on an SSD. Perhaps the server's disk subsystem performance is insufficient? We measured server disk performance during group reposting in two databases at once and got a rather optimistic result.
Despite the relatively high number of input / output operations per second (IOPS) - 913, the queue length did not exceed 1.84, which is a very good result for a two-disk array. Based on it, we can make an assumption that a mirror from ordinary disks will be enough for the normal operation of 8-10 network clients in heavy modes.
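These two figures are mutually consistent: by Little's law, the average queue length equals the arrival rate times the average time a request spends in the system. A quick check (the per-request latency below is derived from the article's numbers, not measured directly):

```python
# Sanity check of the disk subsystem figures via Little's law:
# average queue length = IOPS * average time each request spends in the system.
iops = 913            # operations per second, from the measurement above
queue_length = 1.84   # average disk queue length, from the measurement above

# Derived, not directly measured: average time in system per request.
avg_latency_ms = queue_length / iops * 1000
print(f"{avg_latency_ms:.1f} ms per request")  # ≈ 2.0 ms
```

Around 2 ms per request is comfortable territory for spinning disks, which supports the conclusion that the two-disk mirror is not saturated.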
So is an SSD needed on the server? Testing answers this question best; we conducted it using the same methodology, with a 1 Gbps network connection everywhere and the result again expressed in relative values.
Let's start with the database loading speed.
It may surprise some, but an SSD on the server does not affect database loading speed. The main limiting factors here, as the previous test showed, are network throughput and client performance.
Let's move on to group reposting:
We already noted above that disk performance is quite sufficient even for heavy operations, so SSD speed has no effect here either, except for the non-optimized base, which on the SSD caught up with the optimized one. This once again confirms that optimization operations organize the information in the database, reducing the number of random I/O operations and increasing the speed of access to it.
On everyday tasks, the picture is similar:
Only the non-optimized base benefits from the SSD. You can, of course, buy an SSD, but it would be far better to think about timely maintenance of the databases. And don't forget to defragment the infobase partition on the server.
Client disk subsystem and SSD
We analyzed the influence of an SSD on the speed of a locally installed 1C in the previous article, and much of what was said there also applies to network mode. 1C uses disk resources quite actively, including for background and scheduled tasks. In the figure below you can see Accounting 3.0 accessing the disk quite actively for about 40 seconds after loading.
At the same time, keep in mind that for a workstation actively working with one or two infobases, the performance of an ordinary mass-market HDD is quite enough. Buying an SSD can speed up some processes, but you will not notice a radical acceleration in everyday work, since loading, for example, will be limited by network bandwidth.
A slow hard drive can slow down some operations, but it cannot by itself cause a program to slow down.
RAM
Although RAM is now indecently cheap, many workstations keep working with the amount of memory installed when they were purchased. This is where the first problems lie in wait. Given that an average "three" needs about 500 MB of memory, it is safe to assume that a total of 1 GB of RAM will not be enough to work with the program.
We reduced the system memory to 1 GB and launched two infobases.
At first glance everything looks tolerable: the program has moderated its appetite and fit entirely within the available memory. But let's not forget that its need for working data has not changed, so where did that data go? It was pushed out to disk (cache, swap, etc.): data not needed at the moment is moved from fast RAM, which is in short supply, to the slow disk.
Where does this lead? Let's see how system resources are used during heavy operations, for example by starting group reposting in two databases at once. First, on a system with 2 GB of RAM:
As you can see, the system actively uses the network to receive data and the processor to process it; disk activity is insignificant and, though it occasionally grows during processing, is not a limiting factor.
Now let's reduce the memory to 1 GB:
The situation changes radically: the main load now falls on the hard disk, while the processor and the network are idle, waiting for the system to read the necessary data from disk into memory and push unneeded data out of it.
At the same time, even subjectively, working with two open databases on a system with 1 GB of memory proved extremely uncomfortable: directories and journals opened with a noticeable delay and heavy disk access. Opening the Sales of goods and services journal, for example, took about 20 seconds, accompanied all that time by high disk activity (highlighted with a red line).
In order to objectively assess the impact of RAM on the performance of configurations based on a managed application, we conducted three measurements: the loading speed of the first base, the loading speed of the second base, and group reposting in one of the bases. Both bases are completely identical and created by copying the optimized base. The result is expressed in relative units.
The result speaks for itself: while loading time grows by about a third, which is still quite tolerable, the time to perform operations in the database triples; there can be no talk of comfortable work under such conditions. This, by the way, is a case where buying an SSD can improve the situation, but it is much easier (and cheaper) to deal with the cause rather than the consequences and simply buy the right amount of RAM.
A lack of RAM is the main reason working with the new 1C configurations is uncomfortable. Configurations with 2 GB of memory on board should be considered the minimum suitable. And keep in mind that our tests were run in "greenhouse" conditions: a clean system with only 1C and Task Manager running. In real life a browser, an office suite, an antivirus, and so on are usually also open on a work computer, so budget 500 MB per database plus some margin so that heavy operations do not run into a memory shortage and a drastic drop in performance.
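The sizing advice above can be turned into a quick estimate. The OS/application overhead and the headroom factor below are rough assumptions for illustration, not measurements from the article:

```python
# Rough RAM sizing for a workstation running 1C file-mode databases.
# Overhead figures are assumptions for illustration, not measured values.
OS_AND_APPS_MB = 1500   # Windows + browser + office suite + antivirus (assumed)
PER_BASE_MB = 500       # approximate footprint of one "three" infobase
MARGIN = 1.25           # headroom for heavy operations (assumed)

def recommended_ram_gb(open_bases: int) -> float:
    total_mb = (OS_AND_APPS_MB + open_bases * PER_BASE_MB) * MARGIN
    return total_mb / 1024

print(round(recommended_ram_gb(1), 1))  # ~2.4 GB for one open base
print(round(recommended_ram_gb(2), 1))  # ~3.1 GB for two open bases
```

Even with these conservative assumptions, one open base already pushes past the 2 GB minimum, which matches the test results.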
CPU
The central processing unit can without exaggeration be called the heart of the computer, since it ultimately performs all the calculations. To evaluate its role, we ran another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the tests were run twice, with 1 GB and with 2 GB of memory.
The result turned out to be quite interesting and unexpected: the more powerful processor quite effectively took over the load when resources were scarce, but otherwise gave no tangible benefit. 1C:Enterprise can hardly be called a processor-intensive application; it is rather undemanding. In difficult conditions the processor is loaded not so much by the application's own calculations as by servicing overhead: additional I/O operations and the like.
Conclusions
So why does 1C slow down? First of all, a lack of RAM: the main load then falls on the hard drive and the processor. And if those do not shine with performance, as is usually the case in office configurations, we get the situation described at the beginning of the article: the "two" worked fine, and the "three" slows down shamelessly.
Second place goes to network performance: a slow 100 Mbps channel can become a real bottleneck, although thin client mode can maintain a fairly comfortable level of work even over slow channels.
Then pay attention to the disk subsystem: buying an SSD is unlikely to be a good investment, but replacing the drive with a more modern one will not hurt. The difference between hard drive generations can be judged from the following material: An overview of two inexpensive Western Digital Blue series drives 500 GB and 1 TB.
And finally, the processor. A faster model will certainly not hurt, but there is little point in increasing its performance unless this PC is used for heavy operations: batch processing, heavy reports, month-end closing, and so on.
We hope this material will help you quickly understand the question of "why 1C slows down" and solve it most effectively and at no extra cost.
The phrase “1C slows down” must have been heard by everyone working with products on the 1C:Enterprise platform. Someone complained about it, someone accepted complaints. In this article, we will try to consider the most common causes of this problem and options for solving it.
Let's turn to the metaphor: before finding out why a person did not go somewhere, it is worth making sure that he has legs to walk. So, let's start with the hardware and network requirements.
If Windows 7 is installed:
If Windows 8 or 10 is installed:
Also remember that there must be at least 2 GB of free disk space and a network connection of at least 100 Mbps.
It does not make much sense to consider the characteristics of servers in the client-server version, because in this case everything depends on the number of users and the specifics of the tasks that they solve in 1C.
When choosing a configuration for a server, keep the following in mind:
- One worker process of the 1C server consumes about 4 GB on average (not to be confused with a user connection: one worker process can hold as many connections as you specify in the server settings);
- Running 1C and the DBMS (especially MS SQL) on the same physical server gives a gain when processing large data arrays (for example, month-end closing, budget calculation by model, etc.), but noticeably reduces performance in light operations (for example, creating and posting a sales document);
- Remember that the 1C and DBMS servers must be connected by a channel of at least 1 Gbps;
- Use high-performance disks and do not combine the 1C server and DBMS roles with other roles (such as file server, AD domain controller, etc.).
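The first rule of thumb (about 4 GB per worker process) translates into a simple capacity estimate. The number of users per worker process is whatever you configure in the cluster settings, so the value below is only an assumed example:

```python
# Capacity estimate for a 1C application server, based on the rule of thumb
# of ~4 GB RAM per worker process. users_per_worker is whatever you set in
# the cluster settings; 25 here is an assumed example, not a recommendation.
import math

GB_PER_WORKER = 4

def server_ram_gb(users: int, users_per_worker: int = 25, os_reserve_gb: int = 4) -> int:
    workers = math.ceil(users / users_per_worker)
    return workers * GB_PER_WORKER + os_reserve_gb

print(server_ram_gb(60))   # 60 users -> 3 workers -> 16 GB
print(server_ram_gb(100))  # 100 users -> 4 workers -> 20 GB
```

This is only a starting point for planning; the actual load depends on the configurations in use and the character of the users' work.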
If, after checking the equipment, 1C still “slows down”
We are a small company of 7 people, and 1C "slows down". We turned to specialists, and they said only the client-server option would save us. But that solution is not acceptable to us; it is too expensive!
Carry out routine maintenance in the database*:
1. Start the database in configurator mode.
2. Select the "Administration" item in the main menu, and in it - "Testing and fixing".
3. Set all the checkboxes as in the picture. Click Run.
*This procedure may take from 15 minutes to an hour, depending on the size of the base and the characteristics of your PC.
If this does not help, set up a client-server connection, but without additional investment in hardware and software:
1. Choose the least loaded desktop computer in the office (not a laptop): it must have at least 4 GB of RAM and a network connection of at least 100 Mbps.
2. Activate IIS (Internet Information Services) on it. To do this:
3. Publish your database on this computer. There is available material on this topic on the ITS, or contact a support specialist.
4. On user computers, configure access to the database through the thin client. To do this:
Open the 1C launch window.
Select your working base and click Edit. Set the switch to "On the web server", specify in the line below it the name or IP address of the server on which IIS was activated and the name under which the database was published, then click "Next".
Set the "Main startup mode" switch to "Thin client". Click "Finish".
We are a fairly big company, though not a huge one, 50-60 people. We use the client-server option, but 1C is terribly slow.
In this case, it is recommended to split the 1C server and the DBMS server onto two different servers. When splitting, be sure to remember: if they remain on the same physical server that has simply been virtualized, the disks of the two servers must be different, physically different! Also be sure to set up routine maintenance jobs on the DBMS server when it runs MS SQL (more on this on the ITS website).
We are a fairly big company, more than 100 users. Everything is set up in accordance with the 1C recommendations for this option, but when posting certain documents 1C "slows down" badly, and sometimes a lock error occurs. Maybe we should collapse (fold) the database?
A similar situation arises because of the size of one very specific accumulation or accounting register (more often accumulation): either the register is never "closed", i.e. there are incoming movements but no outgoing ones, or the number of dimensions by which the register balances are calculated is very large. There may even be a mix of the two. How do you determine which register is spoiling everything?
Note the time when documents post slowly, or the time and the user who got the lock error.
Open the event log (registration log).
Find the document you need, at the right time, for the right user, with the "Data.Posting" event type.
Examine the entire transaction block up to the moment the transaction was cancelled (if there was a lock error), or look for the longest step, where the time since the previous record exceeds a minute.
After that, make a decision, bearing in mind that collapsing this particular register is in any case cheaper than collapsing the entire database.
We are a very large company: more than 1000 users, thousands of documents a day, our own IT department, a huge fleet of servers; we have optimized our queries several times, but 1C still slows down. Apparently we have outgrown 1C and need something more powerful.
In the vast majority of such cases it is not 1C that "slows down" but the architecture of the solution. When choosing a new business program, remember that it is cheaper and easier to describe your business processes in a program than to reshape them to fit some other, especially very expensive, program. Only 1C provides that opportunity. So the better questions are: "How do we fix the situation? How do we make 1C 'fly' on such volumes?" Let's look at a few remedies:
- Use the technologies of parallel and asynchronous programming that 1C supports (background tasks and requests in a loop).
- When designing the solution architecture, avoid using accumulation and accounting registers in the "narrowest" places;
- When developing the data structure (accumulation and/or information registers), follow the rule: "the fastest table to write and read is a table with one column." What this means becomes clearer if you look at a typical RAUS mechanism;
- To process large amounts of data, use auxiliary clusters to which the same database is connected (but never do this for interactive work!). This bypasses standard 1C locks and makes it possible to work with the database at almost the same speed as directly with SQL tools.
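As an illustration of the first bullet (splitting a large batch across parallel workers), here is the general pattern in Python; in 1C itself the same idea is expressed with background jobs, and process_document below is just a stand-in stub:

```python
# Illustration of the parallel-processing idea: split a large batch of
# documents into independent units and process them concurrently, the way
# 1C background jobs would. process_document is a hypothetical stub.
from concurrent.futures import ThreadPoolExecutor

def process_document(doc_id: int) -> int:
    # Stand-in for posting or recalculating one document.
    return doc_id * 2

def process_batch(doc_ids, workers: int = 4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order while running tasks in parallel
        return list(pool.map(process_document, doc_ids))

print(process_batch(range(5)))  # [0, 2, 4, 6, 8]
```

The key design point carries over directly: the batch must decompose into independent units so that no two workers contend for the same register totals.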
It is worth noting that 1C optimization for holdings and large companies is a topic for a separate, large article, so stay tuned for updates to the materials on our website.