## Abstract

This paper studies a time-varying distributed convex optimization problem for multi-agent systems with the objective of preserving a desired rigid formation. The core challenge is to drive all agents to minimize the global objective function cooperatively with the limited information each agent can share. To tackle this dual-objective problem, we propose algorithms for single-integrator and double-integrator dynamic systems that solve the distributed optimization problem while achieving the desired shape. Using the Lyapunov theorem, it is proved that under both algorithms all agents achieve the rigid formation while asymptotically optimizing the global objective function. Finally, the performance of the control algorithms is demonstrated through numerical experiments and computer simulations.


## 1 Introduction

Recently, there has been extensive research on the cooperative control of multi-agent systems in the field of automatic control, owing to its numerous applications in machine learning, distributed data regression, source localization, and other domains. Many important issues have been studied in depth, including the consensus problem [1,2,3,4,5], the path tracking problem [6,7,8,9], and synchronous and asynchronous problems [10,11,12]. These studies have yielded substantial results and theoretical advancements.

Among research on distributed control, distributed optimization has received considerable attention. Researchers have sought strategies that enable all agents to reach their ideal states by using only their own information and that of their neighbors. Early works addressed single-integrator systems [13,14,15]; later, distributed optimization with constraints and with time delays was considered in [16,17,18]. More recently, significant progress has been made on continuous-time multi-agent networks with convex constraints, including optimizing the global objective function under both inequality and equality constraints, examining second-order dynamic systems, and taking into account graph properties such as strong connectivity and balance [19, 20]. A distributed continuous-time optimization algorithm was proposed in [21] to minimize the global objective function with non-uniform gradient gains and non-convex input. The work in [22] addressed the distributed second-order optimization problem in the context of an unbalanced and directed network.

Moreover, in some practical applications the optimal point of each agent changes over time according to certain laws, and a conventional time-invariant objective function fails to capture the requirements of such scenarios. Hence, distributed optimization with time-varying objective functions has emerged, and numerous studies have been devoted to it. For instance, in [23], single- and double-integrator systems were considered for distributed optimization with time-varying objective functions, and significant convergence results were established. [24] and [25] proposed distributed optimization algorithms under constraints. [26] presented a distributed algorithm that tackles both unconstrained time-varying optimization problems and a specific type of constrained problem, known as the resource allocation problem. [27, 28] devised algorithms establishing the convergence of all nodes to a common optimal solution of the optimization problem, provided that the time-varying graphs are uniformly jointly connected.

On the other hand, formation control has been a persistent and widely discussed topic for decades, owing to its expansive applications in both civilian and military domains. Maintaining the desired geometric shape of a formation without relying on global information is a critical issue in this field. [29,30,31] were dedicated to designing gradient control laws that asymptotically stabilize rigid, undirected formations in 2-D or 3-D space. Their findings revealed that, in the presence of slight variations in how neighboring agents perceive the desired inter-agent distances, the resulting distorted trajectories remain rigid formations and converge rapidly and exponentially to a closed circular orbit. Viewing the distance-based formation control of agents with double-integrator dynamics as a stabilization problem, [32] and [33] described the control approach through the formulation of a time-dependent Lagrangian function. [34] systematically surveyed the current research landscape of formation control in multi-agent systems, categorizing the bulk of research into four distinct groups: position-based, displacement-based, distance-based, and bearing-based approaches. In engineering, distributed formation control has extensive applications in Unmanned Surface Vehicles (USVs) and has played a crucial role in various areas, including undersea resource exploration, water pollution mitigation, disaster relief operations, and maritime surveillance. In these practical scenarios, each individual agent must work cooperatively to achieve formation control by relying on the perceived distances and relative velocities of its neighbors, along with its own internal information. Such a control strategy is of significance in the formation problem.

However, research on distributed optimization has predominantly revolved around achieving distributed consensus, wherein all agents aim to reach a common state that minimizes the overall global objective function. In real-world scenarios, it is common for agents to maintain distinct states due to their inherent complexities, and in some cases even to remain in continuous motion. This implies that the pursuit of consensus may not align with the specific requirements of a given scenario. Motivated by the previous discussion, we note that reaching the optimal location while keeping a desired formation structure is an important application in engineering. The main contribution of this article can be summarized in four aspects. Firstly, each agent’s objective function in our article is time-varying, which means that the optimal state may not remain fixed but requires tracking the optimal trajectory while the formation structure remains unchanged. Secondly, the optimization and formation control problem allows only the use of local information, i.e., the individual agent’s own information and that of its neighboring agents. Thirdly, the objective is to minimize the global objective function while simultaneously maintaining the desired formation. Finally, this article addresses the problem for both single-integrator and double-integrator dynamic systems.

Two new algorithms are developed in this paper. As an application, the algorithms are employed to make aerial vehicles form a specific shape in a designated area. The single-integrator dynamic system exhibits strong stability and requires minimal assumptions, while the double-integrator dynamic system ensures that the aerial vehicles maintain a particular shape while moving at a uniform speed. The remainder of this article is structured as follows. Preliminaries of graph theory and the problem formulation are introduced in Sect. 2. In Sects. 3 and 4, two algorithms are devised, one for single-integrator dynamic systems and another for double-integrator dynamic systems, and their respective stability is shown. Finally, in Sect. 5, several examples and practical applications are given for illustration.

**Notations:** The \(m \times n\) matrices and the vectors in *n*-dimensional Euclidean space are denoted by \({\mathbb {R}}^{m \times n}\) and \(\mathbb {R}^{n}\), respectively. Let \(\mathbb {R}^+\) denote the set of strictly positive real numbers. Given \({{\varvec{P}}} \in \mathbb {R}^{m \times n}\), \(x \in \mathbb {R}^n\), the transpose of \({{\varvec{P}}}\) is denoted by \({{\varvec{P}}}^T\). The 1-norm and 2-norm of a vector *x* are denoted by \(\parallel x \parallel _1\) and \(\parallel x \parallel \), respectively. \(\nabla f(x)\) denotes the gradient of *f*(*x*), and \(\dot{x}(t)\) denotes the derivative of *x*(*t*). The symbol \(\otimes \) denotes the Kronecker product. \(D_x=diag\{x_1, \dots , x_N\}\), where each \(x_i\) is the *i*-th diagonal block. \({{\varvec{I}}}_N\) denotes the \(N \times N\) identity matrix, and \(|\mathcal {P}|\) denotes the cardinality of a set \(\mathcal {P}\).

## 2 Preliminaries and problem statement

### 2.1 Graph theory

Consider an undirected graph \(\mathcal {G}=(\mathcal {V}, \mathcal {E})\) with *N* nodes, where \(\mathcal {V}=\{1, 2, \ldots , N\}\) denotes the set of nodes, and \(\mathcal {E} \subseteq { \mathcal {V} \times \mathcal {V}}\) denotes the set of edges. Then, by randomly selecting the orientations of these edges, we can represent the elements of the incidence matrix \({{\varvec{B}}} \in \mathbb {R}^{N \times |\mathcal {E}|} \) by

\( b_{ik}={\left\{ \begin{array}{ll} 1,&{}\quad \text {if } i=\mathcal {E}_k^{tail},\\ -1,&{}\quad \text {if } i=\mathcal {E}_k^{head},\\ 0,&{}\quad \text {otherwise}, \end{array}\right. } \)

where \(\mathcal {E}_k^{tail}\) and \(\mathcal {E}_k^{head}\) denote the tail node and head node, respectively, of the edge \(\mathcal {E}_k\), i.e., \(\mathcal {E}_k=(\mathcal {E}_k^{tail}, \mathcal {E}_k^{head})\). To simplify the notation, we write (*i*, *j*) in place of this expression. We introduce \( \mathcal {N}_i \triangleq \{ j \in \mathcal {V}: (i, j) \in \mathcal {E}\}\) for the out-neighborhood of node *i*, allowing each agent *i* to exchange information in a specific direction. It is obvious that \({{\varvec{1}}}_{N}^T {{\varvec{B}}}={\textbf {0}}\), where \({{\varvec{1}}}_{N} \in \mathbb {R}^N\) denotes the vector with all entries equal to one.
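As a concrete numerical illustration (our own sketch, not part of the original text; the function name and the triangle example are assumptions), the incidence matrix of a small oriented graph can be assembled directly from its edge list, with column *k* carrying \(+1\) at the tail and \(-1\) at the head of edge \(\mathcal {E}_k\):

```python
import numpy as np

def incidence_matrix(num_nodes, edges):
    """Incidence matrix B: column k is +1 at the tail and -1 at the head of edge k."""
    B = np.zeros((num_nodes, len(edges)))
    for k, (tail, head) in enumerate(edges):
        B[tail, k] = 1.0
        B[head, k] = -1.0
    return B

# Triangle on three nodes with arbitrarily chosen edge orientations.
B = incidence_matrix(3, [(0, 1), (1, 2), (0, 2)])
print(np.ones(3) @ B)  # every column sums to zero
```

The printed row of zeros reflects the identity \({{\varvec{1}}}_{N}^T {{\varvec{B}}}={\textbf {0}}\): every column of \({{\varvec{B}}}\) contains exactly one \(+1\) and one \(-1\).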

### 2.2 Infinitesimally and minimally rigid

In this subsection, we provide a brief overview of the concepts of infinitesimally rigid and minimally rigid frameworks. Before introducing these definitions, we need to define a rigidity function \(r_\mathcal {G}(x)\). Let \(x_i \in \mathbb {R}^m\), where *m* is the dimension of the ambient space. The rigidity function \(r_\mathcal {G}(x):\,\mathbb {R}^{mN}\rightarrow \mathbb {R}^{|\mathcal {E}|}\) is given as

\( r_\mathcal {G}(x)={1\over 2}\left[ \ldots ,\parallel x_i-x_j \parallel ^2,\ldots \right] ^T,\quad (i,j)\in \mathcal {E}, \)

or, componentwise, for the *k*-th edge \(\mathcal {E}_k=(i,j)\),

\( \left[ r_\mathcal {G}(x)\right] _k={1\over 2}\parallel x_i-x_j \parallel ^2. \)

Then we can obtain *z* as a stacked vector given by \(z=\left[ z_1^T,z_2^T,\ldots ,z_{|\mathcal {E}|}^T\right] ^T\in \mathbb {R}^{m|\mathcal {E}|}\), with \(z_k=z_{k_{ij}} \in \mathbb {R}^m\) being the relative position vector related to the *k*-th edge, and it satisfies \(z=\overline{{{\varvec{B}}}}^Tx\), where \(\overline{{{\varvec{B}}}}={{\varvec{B}}}\otimes {{\varvec{I}}}_m\in \mathbb {R}^{mN\times m|\mathcal {E}|}\). In order to provide an accurate representation of the rigidity property, we introduce a rigidity matrix \(R\in \mathbb {R}^{|\mathcal {E}| \times m N}\), which is given as

\( {{\varvec{R}}}(x)={\partial r_\mathcal {G}(x) \over \partial x}. \)

Since the rigidity function can also be viewed as a function of *z*, i.e., \(r_\mathcal {G}(x)=g_\mathcal {G}(z)={1\over 2}\left[ \parallel z_1 \parallel ^2, \parallel z_2 \parallel ^2,\right. \left. \ldots ,\parallel z_{| \mathcal {E}| }\parallel ^2 \right] ^T\), the rigidity matrix can be written as

\( {{\varvec{R}}}(x)={\partial g_\mathcal {G}(z) \over \partial z}{\partial z \over \partial x}=D_z^T\overline{{{\varvec{B}}}}^T, \)

where \(D_z=diag\{z_1,z_2,\ldots , z_{|\mathcal {E}|}\}.\) The definitions of rigidity and infinitesimal rigidity for the framework are presented as follows.
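To make the construction tangible, the following sketch (ours; the helper name and the example triangle are assumptions) assembles \({{\varvec{R}}}(x)\) row by row, each row being the gradient of \({1\over 2}\parallel x_i-x_j \parallel ^2\) with respect to the stacked state, and checks the rank condition of Definition 2 and the positive definiteness in Lemma 1 numerically for a planar triangle:

```python
import numpy as np

def rigidity_matrix(x, edges):
    """R(x): row k is the gradient of (1/2)||x_i - x_j||^2 w.r.t. the stacked state."""
    N, m = x.shape
    R = np.zeros((len(edges), N * m))
    for k, (i, j) in enumerate(edges):
        z = x[i] - x[j]
        R[k, m * i:m * (i + 1)] = z
        R[k, m * j:m * (j + 1)] = -z
    return R

# Non-degenerate triangle in the plane: N = 3, m = 2, |E| = 2N - 3 = 3.
x = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
R = rigidity_matrix(x, [(0, 1), (1, 2), (0, 2)])
print(np.linalg.matrix_rank(R))                       # mN - m(m+1)/2 = 3
print(bool(np.all(np.linalg.eigvalsh(R @ R.T) > 0)))  # R R^T positive definite
```

For \(N=3\), \(m=2\) the rank condition reads \(rank({{\varvec{R}}}(x))=mN-{1\over 2}m(m+1)=3\), and with \(|\mathcal {E}|=2N-3=3\) edges the framework is minimally and infinitesimally rigid, so \({{\varvec{R}}}(x){{\varvec{R}}}(x)^T\) is positive definite.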

### Definition 1

([35]) A framework \((\mathcal {G},x)\) with the graph \(\mathcal {G}\) and state *x* is considered rigid in \(\mathbb {R}^m\), if there exists a neighborhood \(\mathcal {U}\) of *x*, such that \(r_\mathcal {G}^{-1}(r_\mathcal {G}(x)) \cap \mathcal {U}=r_{\mathcal { \tilde{G}}}^{-1}(r_\mathcal { \tilde{G}}(x))\cap \mathcal {U}\), where \(\mathcal { \tilde{G}}\) denotes a complete graph that shares the same set of nodes as \(\mathcal {G}\).

### Definition 2

([35]) A framework \((\mathcal {G},x)\) is infinitesimally rigid in *m*-dimensional space if \(rank({{\varvec{R}}(x)})=mN-{1 \over 2 }m(m+1).\) Additionally, if the framework is infinitesimally rigid in \(\mathbb {R}^2\) (resp. \(\mathbb {R}^3\)) and has exactly \(2N-3\) (resp. \(3N-6\)) edges, then it is called a minimally and infinitesimally rigid framework in 2-dimensional (resp. 3-dimensional) space.

### Lemma 1

([35]) If the framework \((\mathcal {G},x)\) is minimally and infinitesimally rigid in the *m*-dimensional space, then the matrix \({\textbf {R}}(x){\textbf {R}}(x)^T\) is positive definite, and its eigenvalues are continuous functions of its matrix elements.

### Lemma 2

([23]) For a continuously differentiable convex function *f*(*x*), the optimum of the function will be reached if there is a state \(x^*\), such that the gradient of the function *f*(*x*) at \(x^*\) equals zero, i.e., \(\nabla f(x^*) = {\textbf {0}}\).

### 2.3 Problem statement

Let \(f_i: \mathbb {R}^m \times \mathbb {R}^+ \rightarrow \mathbb {R}\) be a time-varying local objective function for agent *i*, the information of which is known only to the agent itself and its neighbors. Our research pursues two primary objectives. The first is to find possible optimal solutions of the time-varying global objective function, denoted by \(\sum \nolimits _{i=1}^{N}f_i(x_i, t)\). Simultaneously, we endeavor to shape all the agents into a desired formation, i.e., to find optimal solutions \(x_i^*(t)\) for \(i\in \{1, \ldots, N\}\) of the following problem:

\( \begin{array}{cl} \min \limits _{x_1,\ldots ,x_N} &{} \sum \limits _{i=1}^{N}f_i(x_i, t)\\ \text {s.t.} &{} \parallel x_i-x_j \parallel =d_{ij},\quad \forall (i,j)\in \mathcal {E}, \end{array} \qquad \qquad (1) \)

where \(f_i(x_i, t)\) is the time-varying local objective function known only to agent *i* and its neighbors, and \(d_{ij}\) is the desired distance between \(x_i\) and \(x_j\), with \(d_{ij}=d_{ji}\in \mathbb {R} \). For simplicity, we assume that each objective function is convex. In this article, our focus lies in using Lyapunov techniques to prove the convergence of our algorithms. To simplify the expressions, we occasionally use \(e_k\), \(z_k\), \(d_k\) and \(\mathcal {E}_k\) instead of \(e_{k_{ij}}\), \(z_{k_{ij}}\), \(d_{ij}\) and \(\mathcal {E}_{k_{ij}}\) if no confusion arises. These changes allow the vectors to be indexed by *k*, where \(k \in \{1,2, \ldots , |\mathcal {E}|\}\). In the following, we use \(e_k\), \(z_k\), \(d_k\) and \(\mathcal {E}_k\) when index terms are required, and \(e_{k_{ij}}\), \(z_{k_{ij}}\), \(d_{ij}\) and \(\mathcal {E}_{k_{ij}}\) when it aids understanding. For a USV formation problem, we can mainly control the distances to achieve the formation, and the objective function can effectively determine the optimal zone.
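As an illustration of the ingredients of this problem (our own toy example, not from the paper: the drifting targets \(c_i(t)\) and all function names are assumptions), a simple time-varying convex choice is \(f_i(x_i,t)=\parallel x_i-c_i(t)\parallel ^2\), whose gradient each agent can evaluate locally:

```python
import numpy as np

# Hypothetical time-varying local objectives: f_i(x_i, t) = ||x_i - c_i(t)||^2,
# where c_i(t) is a target drifting along a known trajectory.
def target(i, t):
    return np.array([np.cos(t) + i, np.sin(t)])

def local_gradient(x_i, i, t):
    # grad_{x_i} f_i(x_i, t) = 2 (x_i - c_i(t)); known only to agent i and its neighbors.
    return 2.0 * (x_i - target(i, t))

def global_objective(x, t):
    # sum_i f_i(x_i, t); no single agent can evaluate this directly.
    return float(sum(np.sum((x[i] - target(i, t)) ** 2) for i in range(len(x))))

x = np.zeros((3, 2))
print(global_objective(x, 0.0))  # targets at (1,0), (2,0), (3,0): 1 + 4 + 9 = 14
```

By Lemma 2, the unconstrained minimum is attained where every local gradient vanishes, i.e., at \(x_i=c_i(t)\); the distance constraints generally prevent reaching it exactly, which is precisely the trade-off the proposed algorithms resolve.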

### Remark 1

Indeed, the desired shape formation for multi-agent systems involves arranging the agents’ positions to achieve a predetermined shape, often a polygon. The topology is chosen by connecting some of the agents, and it needs to be infinitesimally and minimally rigid. Our approach aims to attain a desired shape solely by regulating distances and designing objective functions that direct agents to specific areas; the number of constraints is tailored accordingly. So before applying the algorithm to form such a polygon, it is crucial to verify the rigidity properties of the problem, i.e., the distance constraints should possess infinitesimal and minimal rigidity, and the structure of the graph must align with the problem’s requirements.

## 3 Time-varying convex optimization for single-integrator dynamics

In this section, we focus on a multi-agent system consisting of *N* agents, where the interactions among them are determined by an undirected graph \(\mathcal {G}\). We adopt a single-integrator dynamic system as a means to address problem (1), which revolves around the dual objectives, i.e., to achieve convergence of all agents towards a desired shape while simultaneously minimizing the global objective function. The general single-integrator distributed dynamic system is given by

\( \dot{x}_i=u_i,\quad i\in \{1,\ldots ,N\}, \)

where \(u_i\in \mathbb {R}^m\) is a control input. For simplicity, we omit the time index if there is no ambiguity. Let \(x_i^*(t)\) for \(i \in \{1, \ldots ,N\}\) denote the optimal states of problem (1). The control should ensure that all agents accurately track the optimal states \(x_i^*(t)\); moreover, \(z^*(t)\) also needs to be in the formation set \(\mathcal {D}\) given by

\( \mathcal {D}=\left\{ z\in \mathbb {R}^{m|\mathcal {E}|}:\,\parallel z_k \parallel =d_k,\; k\in \{1,\ldots ,|\mathcal {E}|\}\right\} , \)

where \(z^*(t)=\overline{{{\varvec{B}}}}^Tx^*(t)\), \(x^*(t)=\left[ {x_1^*}^T(t),{x_2^*}^T(t),\ldots , \right. \left. {x_N^*}^T(t)\right] ^T\).

We observe that this falls under the category of distance-based formation control. Various methods and algorithms enable agents to achieve their objectives using distance-based control under disturbances. [36] presented an algorithm that integrates the benefits of both position-based and distance-based gradient descent control laws. Furthermore, the stability analysis of the error system inspires us to express \(z_i^Tz_j\) in terms of \(\parallel z_i \parallel ^2\) and \(\parallel z_j \parallel ^2\). [37] introduced a disturbance-resistant formation tracking controller specifically designed for underactuated vessels: a disturbance observer was utilized to estimate the lumped disturbance, which comprises model uncertainties and environmental disturbances, and an affine formation tracking control incorporating the disturbance estimate was proposed to counteract the effects of these disturbances. [38] analyzed the impact of distance mismatches on the standard gradient-based rigid formation control for second-order agents, exploring how mismatches can be effectively used as design parameters to control the combined translational and rotational motion of the formation. In this paper, we employ the gradient term to facilitate convergence towards the minimum of the global objective function.

Consequently, to ensure the existence of the solution and the convergence of the algorithm, it is necessary to establish the following assumptions.

### Assumption 1

A continuous solution \(x^*(t)\) exists for the optimization problem of minimizing the function \(\sum \nolimits _{i=1}^{N}f_i(x_i, t)\), and \(z^*(t)\) is contained in \(\mathcal {D}.\)

### Assumption 2

( [23]) Each local objective function \(f_i(x_i,t)\) is at least twice continuously differentiable with respect to \(x_i\), and the corresponding Hessian matrix \(H_i(x_i,t), \,\forall x_i, t\) is required to be invertible.

[39] explored a formation control algorithm that uses Gaussian processes to learn unknown dynamics and ultimately achieve the formation shape. [23] introduced a gradient term to achieve the minimization of the global objective function. In alignment with these approaches, our system is steered by a gradient term, directing it towards the most favorable region. In order to address problem (1), we employ the following distributed algorithm

where \(\alpha >0\), \(e_{k_{ij}}=\parallel z_{k_{ij}} \parallel ^2-d_{ij}^2\), and \(b_{ik_{ij}}\) represents the element in the \(i\)-th row and the \(k\)-th column of the incidence matrix \({{\varvec{B}}}\), in which the edge \(\mathcal {E}_k\) connects agent *i* and agent *j*. \(\mathcal {E}_{k_{ij}}\) and \(\mathcal {E}_{k}\) are equivalent, and similarly, \(z_{{k}_{ij}}\) follows the same convention. \(d_{ij}\) denotes the desired length of edge \(\mathcal {E}_k\). \(\text {sgn}(\cdot )\) is the function determining the sign of its argument, i.e.,

\( \text {sgn}(y)={\left\{ \begin{array}{ll} 1,&{}\quad y>0,\\ 0,&{}\quad y=0,\\ -1,&{}\quad y<0. \end{array}\right. } \)

For brevity, we can rewrite \(\dot{x}_i\) in (2) as \(\dot{x}_i=-\alpha \sum \nolimits _{k=1}^{|\mathcal {E}|} b_{ik}z_{k}\text {sgn}(e_{k})+\phi _i\).
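The abbreviated form above is straightforward to prototype. The following forward-Euler sketch is ours: it keeps the formation term \(-\alpha \sum \nolimits _{k} b_{ik}z_{k}\text {sgn}(e_{k})\) exactly, but substitutes a simple static gradient term \(\phi _i=-\nabla f_i(x_i)\) with \(f_i(x_i)=\parallel x_i-c_i\parallel ^2\) purely for illustration; the paper's actual \(\phi _i\) is the one defined in (2).

```python
import numpy as np

# Forward-Euler sketch of x_i' = -alpha * sum_k b_ik z_k sgn(e_k) + phi_i.
# Stand-in gradient term (an assumption for illustration): phi_i = -grad f_i
# with a static f_i(x_i) = ||x_i - c_i||^2; the paper's phi_i is defined in (2).
def simulate(x, edges, d, c, alpha=3.0, dt=1e-3, steps=30000):
    for _ in range(steps):
        u = -2.0 * (x - c)                     # gradient term phi_i
        for k, (i, j) in enumerate(edges):
            z = x[i] - x[j]                    # z_k = x_i - x_j
            e = z @ z - d[k] ** 2              # e_k = ||z_k||^2 - d_k^2
            u[i] -= alpha * np.sign(e) * z     # b_ik = +1 (tail of edge k)
            u[j] += alpha * np.sign(e) * z     # b_jk = -1 (head of edge k)
        x = x + dt * u
    return x

edges = [(0, 1), (1, 2), (0, 2)]               # minimally rigid triangle, m = 2
d = [1.0, 1.0, 1.0]                            # desired (equilateral) distances
c = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # assumed target positions
x = simulate(np.random.default_rng(0).normal(size=(3, 2)), edges, d, c)
print([round(float(np.linalg.norm(x[i] - x[j])), 2) for i, j in edges])
```

Starting from random positions, the inter-agent distances settle near the prescribed \(d_k\) while the gradient term places the triangle near the targets, mirroring the dual objective of problem (1).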

### Remark 2

The algorithm (2) consists of two components. The term \(\sum \nolimits _{\mathcal {E}_k\in \mathcal {E}}b_{ik}z_{k}\text {sgn}(e_{k})\) serves the purpose of ensuring that all agents maintain the desired shape and are steered towards the resulting formation set \(\mathcal {D}\). \(\phi _i\) is the gradient term, which plays a crucial role in minimizing the global objective function. Without the gradient term, the system would realize the desired shape at an arbitrary position.

### Assumption 3

( [23]) Considering the definition of \(\phi _i\) in (2), there exists \(\overline{\phi } >0\), such that \(\parallel \phi _i-\phi _j \parallel \le \overline{\phi }\) holds for all \( (i,j) \in \mathcal {E}.\)

Before demonstrating the convergence of the algorithm, we first show that no agent moves excessively far away from the others under the control (2); in other words, the distances between any pair of agents remain bounded under algorithm (2).

### Lemma 3

Assume that the formation with desired graph \(\mathcal {G}\) is infinitesimally and minimally rigid, and that Assumptions 1, 2 and 3 are satisfied, then there exists \(\alpha >0\), such that \(z_k\) for \(k\in \{1, \dots , |\mathcal {E}|\}\) are bounded under algorithm (2).

### Proof

The system (1) with control (2) can be reformulated in the more compact representation

\( \dot{x}=-\alpha \overline{{{\varvec{B}}}}D_z\text {sgn}(e_x)+\varPhi , \qquad \qquad (3) \)

where \(\overline{{{\varvec{B}}}}={{\varvec{B}}}\otimes {{\varvec{I}}}_m,\) \(\varPhi =\left[ \phi _1^T,\phi _2^T,\ldots ,\phi _N^T\right] ^T,\) \(e_x=\left[ e_1,e_2,\ldots ,e_{|\mathcal {E}|}\right] ^T,\) \(\text {sgn}(e_x)=\left[ \text {sgn}(e_1),\text {sgn}(e_2), \ldots ,\right. \left. \text {sgn}(e_{|\mathcal {E}|})\right] ^T.\) Then it follows from (3) that

\( \dot{z}=\overline{{{\varvec{B}}}}^T\dot{x}=-\alpha \overline{{{\varvec{B}}}}^T\overline{{{\varvec{B}}}}D_z\text {sgn}(e_x)+\overline{{{\varvec{B}}}}^T\varPhi . \qquad \qquad (4) \)

Before proving the boundedness of *z*, we add auxiliary edges to create a complete graph \(\mathcal {\tilde{G}}\), whose edges are collected in the set \(\mathcal {\tilde{E}}\). We define the sets \(\mathcal {S}\) and \(\mathcal {{\hat{S}}}\) as containing all \(\pm z_k\) and \( \pm {z}_{\hat{k}}\), respectively, with \(k\in \{1, \ldots , |\mathcal {E}|\}\) and \(\hat{k}\in \{|\mathcal {E}|+1, \ldots , |\mathcal {\tilde{E}}|\}\), where the vector \( z_{\hat{k}}\) corresponds to the auxiliary edge \(\mathcal {E}_{\hat{k}}\), and \(\mathcal {\tilde{S}}= \mathcal {S} \cup \mathcal {\hat{S}}\). The set containing all neighbors of the edge \(\mathcal {E}_{k_{ij}}=(i, j)\) is denoted by \( \Omega _{k_{ij}}=\Omega _{k}=\{(i, p)| p\in \mathcal {N}_{i}\} \cup \{(p, i)| p\in \mathcal {N}_{i}\} \cup \{(q, j)|q\in \mathcal {N}_{j}\} \cup \{(j, q)|q\in \mathcal {N}_{j}\}-\{(i,j),(j,i)\}\). It is worth noting that in a directed graph, (*i*, *j*) is not equivalent to (*j*, *i*), so every element in \(\Omega _{k_{ij}}\) is linked to \(\mathcal {E}_{k_{ij}}\). Letting \(W_k={1 \over 2}\parallel z_k \parallel ^2\) and \(W=\sum \nolimits _{k=1}^{|\mathcal {E}|}W_k\), it follows from equation (4) that

It can be observed that there always exists a triangle comprising the edges \(\mathcal {E}_k\), \(\mathcal {E}_l\), and an edge \(\mathcal {E}_{<k,l>}\), where \(\mathcal {E}_{<k,l>} \in \tilde{\mathcal {E}}\) connects these two edges. Let \(z_{<k,l>}\) be the vector corresponding to the edge \(\mathcal {E}_{<k,l>} \in \tilde{\mathcal {E}}\) in the complete graph \(\mathcal { \tilde{G}}\). If \(z_k=x_i-x_j\), then it corresponds to the edge (*i*, *j*), which means the exit node and entry node are *i* and *j*, respectively. If \(b_{il}-b_{jl}=-1\), then either \(b_{il}=0\) and \(b_{jl}=1\), or \(b_{il}=-1\) and \(b_{jl}=0\). In the former case, node *j* corresponds to the exit node of \(\mathcal {E}_{l}\); in the latter, node *i* corresponds to the entry node of \(\mathcal {E}_{l}\). Clearly, \(b_{il}-b_{jl}\) in the last equation of (5) can only take the values 1, \(-1\), or 0: if \(b_{il}-b_{jl}\) were equal to 2, it would signify that \((i,j)=\mathcal {E}_l\), and \(b_{il}-b_{jl}\) cannot equal \(-2\) owing to the structure of the incidence matrix and the orientation of the edges. Let \(\theta _{kl} \) represent the angle formed by the two vectors \(z_k\) and \(z_l\). It is easy to see that the sign of \(\text {cos}(\theta _{kl})\) depends on the directions of the edges \(\mathcal {E}_k\) and \(\mathcal {E}_l\): if these two edges share the same exit node or the same entry node, then \(\text {cos}(\theta _{kl})\) is positive and \(z_k-z_l\in \mathcal { \tilde{S}}\); otherwise, \(z_k+z_l\in \mathcal { \tilde{S}}\). Noting that \((b_{il}-b_{jl})=0\) if \(\mathcal {E}_l \notin \Omega _k\), \((b_{il}-b_{jl})=1\) if \(z_k-z_l \in \mathcal { \tilde{S}}\), and \((b_{il}-b_{jl})=-1\) if \(z_k+z_l \in \mathcal { \tilde{S}}\), then, from the last equation in (5) and the law of cosines, we obtain

Then, by summing both sides of the last inequality in (6), we obtain

Consider first the term \(\sum \limits _{k=1}^{|\mathcal {E}|}\sum \limits _{\mathcal {E}_l\in \Omega _k} \parallel z_l \parallel ^2\text {sgn}(e_l)\) in (7). For the term *k*, the set \(\Omega _k\) collects all edges \(\mathcal {E}_l\) connected to \(\mathcal {E}_k\), \(l\in \{1,\ldots ,|\mathcal {E}|\}\) and \(\mathcal {E}_k \notin \Omega _k\); then for these terms *l*, \(\mathcal {E}_k\) must be in \(\Omega _l\). Hence only the terms whose corresponding edges are connected to edge *k* contain \(\parallel z_k \parallel ^2\text {sgn}(e_k)\). Therefore, since \(\sum \limits _{k=1}^{|\mathcal {E}|}\sum \limits _{\mathcal {E}_l\in \Omega _k} \parallel z_l \parallel ^2\text {sgn}(e_l)\) is the summation of all terms \(\parallel z_k \parallel ^2\text {sgn}(e_k)\), for a given *k*, collecting all the \(\parallel z_k \parallel ^2\text {sgn}(e_k)\) from the other expressions \(\sum \limits _{\mathcal {E}_l\in \Omega _{k'}} \parallel z_l \parallel ^2\text {sgn}(e_l)\), \(k'\in \{1,\ldots ,|\mathcal {E}|\}\) and \(k'\ne k\), we conclude that the number of terms \(\parallel z_k \parallel ^2\text {sgn}(e_k)\) equals \(|\Omega _k|\), i.e.,

Now focus on the expression \( \sum \limits _{k=1}^{|\mathcal {E}|}\sum \limits _{\mathcal {E}_l\in \Omega _k}(\parallel z_k\parallel ^2- \parallel z_{<k,l>}\parallel ^2) \text {sgn}(e_l)\) in (7). Given that the graph \(\mathcal {\tilde{G}}\) is complete, for a selected *k*, if \(z_{<k,l>}\in \mathcal {S}\), then \(\mathcal {E}_{<k,l>}\in \mathcal {E}\) and must be connected to both \(\mathcal {E}_{k}\) and \(\mathcal {E}_{l}.\) This implies the existence of another term \(k'\), such that \(\parallel z_{<k,l>}\parallel ^2 \text {sgn}(e_l)\) and \(-\parallel z_{k}\parallel ^2 \text {sgn}(e_l)\) appear in the summation \(\sum \limits _{l'\in \Omega _{k'}}(\parallel z_{k'}\parallel ^2- \parallel z_{<k',l'>}\parallel ^2) \text {sgn}(e_{l'})\). Notably, at this stage, \(l'=l\), \( z_{<k',l'>}=z_k\), \(z_{k'}=z_{<k,l>}\). This is because, for a triangle formed by the edges \(\mathcal {E}_k\), \(\mathcal {E}_l\), and \(\mathcal {E}_{<k,l>}\), \(\text {cos}\theta _{kl}\) and \(\text {cos}\theta _{<k,l>l}\) both involve \(\parallel z_k\parallel ^2 \), \(\parallel z_l\parallel ^2 \) and \(\parallel z_{<k,l>}\parallel ^2 \). Fixing the edge \(\mathcal {E}_l\) and considering the situation where \(k'=<k,l>\), the term \( (\parallel z_{k'}\parallel ^2- \parallel z_{<k',l>}\parallel ^2) \text {sgn}(e_l)\) appears in \( \sum \limits _{k=1}^{|\mathcal {E}|}\sum \limits _{\mathcal {E}_{l_1}\in \Omega _k}(\parallel z_k\parallel ^2- \parallel z_{<k,l_1>}\parallel ^2) \text {sgn}(e_{l_1})\). Since \(z_{<k',l>}= z_k\) in this triangle, the sum of \((\parallel z_{k}\parallel ^2- \parallel z_{<k,l>}\parallel ^2) \text {sgn}(e_l)\) and \((\parallel z_{k'}\parallel ^2- \parallel z_{<k',l>}\parallel ^2) \text {sgn}(e_l)\) is zero. Consequently, only the terms which link \(z_k\in \mathcal {S}\) to \(z_{<k,l>} \in \hat{\mathcal {S}}\) remain.

To denote this set of edges, we introduce the notation \(\overline{\Omega }_k=\{\mathcal {E}_l| z_k\in \mathcal {S}, \forall z_{<k,l>}\in \hat{\mathcal {S}}\} \subseteq \mathcal {E}\). This set effectively captures the edges that establish a connection between the auxiliary edges and \(\mathcal {E}_k\). After merging similar terms, we get

Similarly, for \(\mathcal {E}_l\in \overline{\Omega }_k\), we know \(\mathcal {E}_k\in \overline{\Omega }_l\). Matching the two terms together and computing the summation, we get

Since

this translation decomposes \(z_{<k,l>}\in \hat{\mathcal {S}}\) into two vectors corresponding to two edges in \({\mathcal {E}}\), which is convenient for the subsequent manipulation. Therefore, we obtain

It is apparent that the matrix

is positive semidefinite, so it follows from (7), (8), (9) and (10) that

Now focus on the last term of the last inequality in (11). If \(1-\text {sgn}(e_k) = 2\) or \(1-\text {sgn}(e_k) = 1\), then \(\parallel z_k \parallel ^2 \le d_{k}^2\); conversely, if \(\parallel z_k \parallel ^2 >d_{k}^2\), then \(1-\text {sgn}(e_k)=0\). Hence

Let \(\Delta =\sum \limits _{k=1}^{|\mathcal {E}|}2\alpha (2+{1 \over 2}|\Omega _k|)d_{k}^2\); for a given connected graph, \(\Delta \) is bounded. If there exists \(\epsilon >0\) such that \(\epsilon > \max \limits _{k}(\alpha (2+{1 \over 2}|\Omega _k|)-{1 \over 2}-{3\over 2}\alpha |\overline{\Omega }_k|) \), we can adjust \(\alpha \) so that \(\epsilon =\max \limits _{k}(\alpha (2+{1 \over 2}|\Omega _k|-{3\over 2}|\overline{\Omega }_k|))+{1 \over 2} \ge 0\); then we can deduce that \(\dot{W}\le -\epsilon W+{1 \over 2}|\mathcal {E}|\overline{\phi }^2+\Delta \). Therefore, algorithm (2) keeps *W* bounded, and thus each \(z_k\) is also bounded. \(\square \)

### Remark 3

Here we give an example to illustrate the construction of the graph in the proof of Lemma 3. For dimension \(d=2\), consider the graph shown in Fig. 1: the solid lines in Fig. 1a represent the connected edges, while the dashed lines represent auxiliary edges. Figure 1b is a directed graph obtained by randomly assigning orientations to form the incidence matrix \(\varvec{B}\). For dimension \(d=3\), e.g., a regular tetrahedron, the construction follows a similar pattern; but since the graph must be infinitesimally and minimally rigid, the incidence matrix \(\varvec{B}\) has to be changed.

### Remark 4

Lemma 3 provides a new way to show that the state remains bounded under gradient perturbations. The use of auxiliary edges serves as a means to discern why, for certain parameter values or beyond a specified threshold, the algorithm remains effective in preventing the divergence of distances between agents.

Let \(\kappa \) be an upper bound of \(\parallel D_z \parallel \), then we get a result as follows.

### Proposition 1

Suppose that the formation with desired graph \(\mathcal {G}\) is infinitesimally and minimally rigid and that Assumptions 1, 2 and 3 are satisfied. Then there exists a parameter \(\alpha > 0\) such that the algorithm (2) for the problem (1) achieves finite-time convergence of the formation shape. Specifically, letting \(T_0={1 \over \alpha \min \limits _{k}{\lambda _k}-\kappa \overline{\phi }}\parallel e_x(0)\parallel _1\), as \(t\rightarrow T_0\), \(\parallel x_i-x_j \parallel -d_{ij}\rightarrow 0, \forall (i,j)=\mathcal {E}_{k_{ij}}\in \mathcal {E}\).

### Proof

We can rewrite the first equality of algorithm (2) as

Then we construct a Lyapunov function as

It follows from the expression of \(V_1\) and equation (12) that

By applying Lemma 1, the matrix \(D_z^T\overline{{{\varvec{B}}}}^T\overline{{{\varvec{B}}}}D_z\) is positive definite. From Lemma 3, we know each \(z_k\) is bounded; it then follows that \(e_x\) and \(V_1(e_x) \) are also bounded. Define a compact set \(\mathcal {H}=\{e_x: V_1(e_x)\le \gamma \}\), where \(\gamma \) is an upper bound of \(V_1(e_x)\). Since the eigenvalues of the matrix \(D_z^T\overline{{{\varvec{B}}}}^T\overline{{{\varvec{B}}}}D_z\) can be expressed as continuous functions of the matrix elements, there exists a minimum among these eigenvalues. Denoting by \(\lambda _k\), \( k \in \{1,\ldots ,|\mathcal {E}|\}\), the eigenvalues of \(D_z^T\overline{{{\varvec{B}}}}^T\overline{{{\varvec{B}}}}D_z\), we have

where \(\overline{\lambda }\) denotes the minimum eigenvalue of \(D_z^T\overline{{{\varvec{B}}}}^T\overline{{{\varvec{B}}}}D_z\) within the compact set \(\mathcal {H}\). Selecting \(\alpha \) such that \(\alpha \ge {\kappa \overline{\phi } \over \overline{\lambda }}\), it follows that \(\dot{V}_1 \le -\alpha \bar{\lambda } +\kappa \bar{\phi }\) for \(e_x \ne 0\). Consequently, \(V_1(t) \le (-\alpha \bar{\lambda } +\kappa \bar{\phi })t +V_1(0)\), and since \(V_1\ge 0\), \(V_1\) converges to 0 as \(t\rightarrow T_0\). \(\square \)
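The finite-time bound \(T_0\) in Proposition 1 can be sanity-checked numerically. The sketch below evaluates \(T_0=\parallel e_x(0)\parallel _1/(\alpha \min _k \lambda _k-\kappa \overline{\phi })\); all numeric values are hypothetical placeholders, not quantities taken from the paper.

```python
# Illustrative evaluation of the finite-time bound T0 from Proposition 1.
# All numeric values below are hypothetical placeholders.

def t0_bound(e_x0_l1, alpha, lambda_min, kappa, phi_bar):
    """T0 = ||e_x(0)||_1 / (alpha * min_k lambda_k - kappa * phi_bar).
    The bound is meaningful only when the denominator is positive,
    i.e., alpha > kappa * phi_bar / lambda_min."""
    denom = alpha * lambda_min - kappa * phi_bar
    if denom <= 0:
        raise ValueError("alpha too small: need alpha > kappa*phi_bar/lambda_min")
    return e_x0_l1 / denom

# Example: a larger alpha shrinks the convergence-time bound.
t_fast = t0_bound(e_x0_l1=4.0, alpha=5.0, lambda_min=0.5, kappa=1.0, phi_bar=1.0)
t_slow = t0_bound(e_x0_l1=4.0, alpha=3.0, lambda_min=0.5, kappa=1.0, phi_bar=1.0)
print(t_fast, t_slow)  # t_fast < t_slow
```

This also makes the role of the gain condition explicit: for \(\alpha \le \kappa \overline{\phi }/\min _k\lambda _k\) the denominator is non-positive and no finite bound is obtained.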

### Theorem 1

Assume that the formation with desired graph \(\mathcal {G}\) is infinitesimally and minimally rigid and that Assumptions 1, 2 and 3 are satisfied. If \(H_i(x_i,t)=H_j(x_j,t)\), \(\forall t\), \(\forall (i,j) \in \mathcal {E}\), then by using algorithm (2) for problem (1), the optimization objective is effectively achieved and the desired shape is formed simultaneously.

### Proof

Define \(V_2\) as

Since the graph is connected and \({{\varvec{B}}}\) is its incidence matrix, for each \((i,j)\in \mathcal {E}\) there exist entries \(b_{ik}\) and \(b_{jk}\) such that \(b_{ik}+b_{jk}=0\). Then we get

Since the Hessian matrices of the individual agents’ objective functions are identical, similar to the analysis in [23], computing the derivative of \(V_2\) yields

Therefore, by solving equation (14), we have \(V_2=V_2(0)e^{-2t}\), which implies that \(V_2\) converges asymptotically to 0 as \(t\rightarrow \infty \). The convergence of \(V_2\) to 0 ensures the convergence of the gradient sum \(\sum \limits _{i=1}^{N}\nabla f_i(x_i,t)\) towards \({\textbf {0}}\) as well. Since \(\sum \limits _{i=1}^{N} f_i(x_i,t)\) is convex, if \(\sum \limits _{i=1}^{N}\nabla f_i(x_i,t)= {\textbf {0}}\), then \(\sum \limits _{i=1}^{N} f_i(x_i,t)\) converges to the minimum according to Lemma 2. Furthermore, by applying Proposition 1, the agents form the desired shape cooperatively in finite time. Hence, as time goes to infinity, the dual objectives of converging to the minimum of the global objective function and attaining the desired shape are accomplished simultaneously. \(\square \)

This approach achieves a desired shape through precise control of distances together with objective functions that steer agents towards designated zones. The number of constraints is designed to ensure that the graph is infinitesimally and minimally rigid, allowing it to fit the specific situation. Hence, before implementing the algorithm to form the polygon, it is essential to verify the rigidity properties of the problem.

### Remark 5

In traditional distributed optimization, all the states eventually achieve global consensus while minimizing the global objective function, i.e.,

In fact, if we set \(d_{ij}=0\) in problem (1), our problem degenerates into traditional distributed optimization. However, the graph in the traditional setting is only required to be connected, while the graph in our problem must be at least infinitesimally rigid and connected. The main purpose of the graph we construct is to make the agents form the desired shape.

### Remark 6

If all agents’ objective functions are time-invariant, problem (1) degenerates into the problem

then the algorithm we propose also works, where the gradient term \(\phi _i\) should be changed into \(-H_i^{-1}\nabla f_i(x_i)\).

In real-world scenarios, it may be necessary to dynamically scale the formation and to have the desired shape move along a certain trajectory. To address the scaling issue, a feasible approach involving continuously varying desired distances \(d_{ij}\) is given in the following corollary.

### Corollary 1

If the desired distance \(d_{ij}(t)\) is differentiable and bounded and the derivative of \(d_{ij}(t)\) is also bounded for \(t>0\), then there exists \(\alpha >0\) such that algorithm (2) makes the agents form a time-varying desired shape while the global objective function is minimized.

### Proof

If \(d_{ij}(t)\) is continuous, differentiable and bounded, then \(\dot{e}_x=2D_z^T\dot{z}-2D_d^T\dot{d}\), where \(d=\left[ d_1,d_2,\ldots ,d_{|\mathcal {E}|}\right] ^T\). Denote the uniform upper bounds on \(\parallel d_{k}\parallel \) and \(\parallel \dot{d}_k \parallel \) by \({\sigma }\) and \({\delta }\), respectively, for all \(t\) and all \(k\). Therefore, for \(e_x\ne 0\), one can obtain

If \(\alpha \ge {\overline{\lambda }}(\kappa \overline{\phi }+\sigma \delta )\), where \(\overline{\lambda }={1 \over \min \limits _{k} \lambda _k}\), then \(\dot{V}_1<0\) and, similar to the analysis in Proposition 1 and Theorem 1, the dual objectives will be achieved. \(\square \)

This boundedness of \(d_{ij}\) and \(\dot{d}_{ij}\) ensures that algorithm (2) adjusts the positions and the inter-agent distances steadily and smoothly, ultimately leading to the attainment of the desired shape.

Besides, if a certain motion of the formation is required, we can design specific objective functions. By taking the average state of all agents as the central state, the central state of the system will track the central control trajectory associated with the objective function while maintaining the desired shape.

### Remark 7

In [40], the distributed prescribed-time optimization problem was studied, and a cooperative hunting task was presented with the goal of encircling an evader swarm composed of multiple agents in a distributed setting. In our research, we also perform a hunting task that involves encircling an agent, using a new approach, and our problem can be extended to time-varying objective functions.

## 4 Time-varying convex optimization for double-integrator dynamics

In this section, our focus shifts to the double-integrator dynamic system. Single-integrator dynamics enable one to focus on stability and convergence while ignoring the kinematics of each individual agent. In engineering applications, double-integrator dynamics may be more realistic: they describe the kinematic and inertial properties of robots, and the control actions can be perceived as expected accelerations within the guidance system. Compared to the single-integrator case discussed previously, the double-integrator case includes velocity control, enabling the manipulation of agent movements to form the desired shape within specific positions or zones. Additionally, the incorporation of velocity into the dynamics enables the minimization of the global objective function. Therefore, we propose an algorithm that directly controls the acceleration of each agent. The double-integrator algorithm can be represented as

where \(x, v, u\in \mathbb {R}^{mN}\) are the stacked vectors of the agents’ states \(x_i\in \mathbb {R}^{m}\), velocities \(v_i\in \mathbb {R}^{m}\) and control inputs \(u_i\in \mathbb {R}^{m}\) for \(i \in \{1,\ldots , N\}\). The control inputs are designed to control the acceleration of each agent’s dynamics. In [38], the authors proposed a double-integrator dynamic system by assigning a control law as

Furthermore, the system was extended to a new dynamic system given by

where \(\lambda \in [0,1]\). Extensive analysis has established the convergence of that algorithm, demonstrating local exponential convergence of *z*(*t*) towards the desired formation \(\mathcal {D}\) and convergence of the velocities *v*(*t*) towards \({\textbf {0}}\). Inspired by the concepts presented in that work, a gradient term and a compensation term are introduced here. Adding the auxiliary edges to form a complete graph, our algorithm is presented as

Similarly, \(\dot{v_i}\) in (15) can be rewritten as \(\dot{v_i}=-\mu \sum \nolimits _{k=1}^{|\mathcal {E}|} b_{ik}(\text {sgn}(v_i)-\text {sgn}(v_j))- \sum \nolimits _{k=1}^{|\mathcal {E}|}b_{ik}z_{k}e_{k}+H_i(x_i,t)\)\([\nabla f_i(x_i,t)-{1\over N}\sum \nolimits _{l=1}^{N}\nabla f_l(x_l,t)]+\psi _i\).

### Remark 8

In algorithm (15), \(-\mu \sum \limits _{k=1}^{|\mathcal {E}|} b_{ik}(\text {sgn}(v_i)-\text {sgn}(v_j))\) is a compensation term and \(\psi _i\) is a gradient term. These additional terms are critical in driving the system to the optimal states. Additionally, we can write algorithm (15) in the compact form

where \({\varPsi } =[\psi _1^T, \psi _2^T,\ldots , \psi _{N}^T]^T\), \(\varvec{\varGamma }=(I_{N}-{1 \over N} {{\varvec{1}}}_{N}{{\varvec{1}}}_{N}^T) \otimes {{\varvec{I}}}_m\), \(\nabla F=[\nabla f_1(x_1,t)^T,\nabla f_2(x_2,t)^T,\ldots ,\nabla f_N(x_N,t)^T]^T\), \(x=[x_1^T,x_2^T,\ldots ,x_N^T]^T\) and \(v=[v_1^T,v_2^T,\ldots ,v_N^T]^T\).
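The matrix \(\varvec{\varGamma }\) in the compact form is the centering projector: it removes the consensus (average) component of a stacked vector. A quick numerical check, with the illustrative choice \(N=3\) and \(m=1\) (these sizes are ours, not the paper's):

```python
# Check that Gamma = (I_N - (1/N) 1 1^T) (x) I_m is a projector that
# annihilates the consensus direction; here N = 3 and m = 1 for brevity.
N = 3
P = [[(1.0 if i == j else 0.0) - 1.0 / N for j in range(N)] for i in range(N)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

P2 = matmul(P, P)
# Idempotence: P @ P == P.
idempotent = all(abs(P2[i][j] - P[i][j]) < 1e-12 for i in range(N) for j in range(N))
# Each row sums to zero, so P maps the all-ones vector to zero.
kills_ones = all(abs(sum(P[i])) < 1e-12 for i in range(N))
print(idempotent, kills_ones)  # True True
```

This is why \(\varvec{\varGamma } (\varPsi +D_H\nabla F)\) measures only the disagreement among the agents' gradient terms, not their common component.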

We need to show that the algorithm (15) converges to the equilibrium state, denoted as \((x^*(t),v^*(t))\), where \(x^*=[{x_1^*}^T,\ldots ,{x_N^*}^T]^T\in \mathbb {R}^{mN}\), \(v^*=[{v_1^*}^T,\ldots ,{v_N^*}^T]^T\in \mathbb {R}^{mN}\). With the desired distances \(d_{ij}\), and the desired final common velocity \(\hat{v}\), here we introduce a set \(\mathcal {D}_2\) by

To achieve our goals, two main tasks remain to be solved. Firstly, we need to demonstrate that the gradient of global objective function at the equilibrium state is equal to \({\textbf {0}}\). Secondly, we need to show \(e_{x^*}={\textbf {0}}\) as well as \(v_i^*=v_j^*\), \((i,j)=\mathcal {E}_{k_{ij}}\in \mathcal {E}\), i.e., \([{x^*}^T,{v^*}^T]^T \in \mathcal {D}_2\).

### Assumption 4

There exists a constant \(\overline{\psi } \ge 0\) such that \(\psi _i\) given in (15) satisfies \(\parallel \psi _i-\psi _j+H_i\nabla f_i(x_i,t)-H_j\nabla f_j(x_j,t)\parallel \le \overline{\psi }\), \(\forall (i,j)\in \mathcal {E}\). Moreover, for all \((i,j)\in \mathcal {E}\), \(\parallel \dot{\psi }_i-\dot{\psi }_j\parallel \) has an upper bound.

### Proposition 2

Assume that the desired graph \(\mathcal {G}\) for the formation is complete and that Assumptions 1, 2 and 4 hold. For any initial conditions, if \(H_i(x_i,t)=H_j(x_j,t)\), \(\forall t\), \(\forall (i,j)\in \mathcal {E}\), then as \(t \rightarrow \infty \) the states \(x_i\), \(i \in \{1,\ldots , N\}\), in algorithm (15) converge to the optimal solutions \(x_i^*\) of problem (1).

### Proof

Construct the following Lyapunov function

where \(S_i=H_i^{-1}(x_i,t)\left( {\partial \over \partial t} \nabla f_i(x_i,t)+\nabla f_i(x_i,t)\right) \). Since \(\sum \nolimits _{i=1}^{N}\dot{v}_i=\sum \nolimits _{i=1}^{N}\psi _i \), following an approach similar to that in [23],

Since \(\psi _i+H_i \nabla f(x_i,t)-\dot{S}_i={\textbf {0}}\), then

Thus, when \(\sum \nolimits _{i=1}^{N}\nabla f_i(x_i,t) \ne {\textbf {0}}\), it follows that \(\dot{V}_3 <0\) and \(0\le V_3\le V_3(0)\). It can be inferred that \(\sum \nolimits _{i=1}^{N}\nabla f_i(x_i,t)\) and \(\sum \nolimits _{i=1}^{N}v_i-\sum \nolimits _{i=1}^{N}S_i\in \mathcal {L}_\infty \). Integrating both sides of equation (17), it follows that \(\sum \nolimits _{i=1}^{N}\nabla f_i(x_i,t) \in \mathcal {L}_2\). By Barbalat’s Lemma, \(\sum \nolimits _{i=1}^{N}\nabla f_i(x_i,t)\) converges to \({\textbf {0}}\) asymptotically as \(t \rightarrow \infty \). Since \(\sum \nolimits _{i=1}^{N} f_i(x_i,t)\) is convex, if \(\sum \nolimits _{i=1}^{N}\nabla f_i(x_i,t)= {\textbf {0}}\), then \(\sum \nolimits _{i=1}^{N} f_i(x_i,t)\) converges to the minimum. \(\square \)

### Theorem 2

Given Assumptions 1, 2 and 4, and assuming the desired graph \(\mathcal {G}\) for the formation is complete, if \(\varvec{\varGamma } (\varPsi +D_H\nabla F)\rightarrow {\textbf {0}}\) as \(t \rightarrow \infty \) for any initial conditions and \(\mu \ge {\overline{\psi } \over 2}(N-1)\), then all agents will ultimately form the desired shape, and the velocities of connected agents become identical, i.e., \([x_i,v_i]\in \mathcal {D}_2\) as \(t \rightarrow \infty \). Simultaneously, the optimization objective is achieved.

### Proof

Let \(\overline{\delta }\in \mathbb {R}^m\) denote the average velocity of all agents, i.e., \(\overline{\delta }={1 \over N}\sum \nolimits _{i=1}^{N}v_i\). It is obvious that the time derivative of \(\overline{\delta }\) is \(\dot{\overline{\delta }}={1 \over N}\sum \nolimits _{i=1}^{N}\psi _i\). Define \(\delta \) as \(\delta =[\delta _1^T,\delta _2^T, \ldots \delta _N^T]^T\), where \(\delta _i=v_i-\overline{\delta }\), \(\delta _i\) represents the difference between the velocity of the agent and the average velocity. By substituting the definition of \(\delta \), we obtain \(\dot{\delta }=\dot{v}-{1 \over N}{{\varvec{1}}}_{N} \otimes \sum \nolimits _{i=1}^{N}\psi _i\) and \(v_i-v_j=\delta _i-\delta _j\). Consequently, the systems (16) can be rewritten as

Construct a Lyapunov function \(V_4={1 \over 2}\delta ^T\delta +{1\over 4}e_x^Te_x\), then we get

If \(\mu \ge {\overline{\psi } \over 2}(N-1)\), we have \(\dot{V}_4\le 0\), and then \( 0\le V_4(e_x(t),\delta (t)) \le V_4(e_x(0),\delta (0))\), which indicates that \(e_x(t)\) and \(\delta (t)\) are bounded. Consequently, \(\dot{V_4}\) is also bounded. Nonetheless, as the system is not autonomous, LaSalle’s Invariance Principle is not applicable. By Assumption 4, the second derivative of \(V_4\), \(\ddot{V}_4=-\mu \sum \nolimits _{i=1}^{N}\sum \nolimits _{j\in \mathcal {N}_i} \text {sgn}(\delta _i-\delta _j)^T(\dot{\delta _i}-\dot{\delta _j})+{1 \over 2N}\sum \nolimits _{i=1}^{N}\sum \nolimits _{j=1}^{N}((\delta _i-\delta _j)^T(\dot{\psi _i} -\dot{\psi _j})+(\psi _i-\psi _j)^T(\dot{\delta _i}-\dot{\delta _j})) \), exists and is bounded. Then, by Barbalat’s Lemma [41], \(\parallel \delta _i-\delta _j\parallel _1\rightarrow 0\) for all \((i,j)\in \mathcal {E}\). Since the graph is complete, we can deduce \(\delta _1=\delta _2=\cdots =\delta _N\). Combining this with \(\sum \nolimits _{i=1}^{N}\delta _i={\textbf {0}}\), it follows that \(\delta _1=\delta _2=\cdots =\delta _N={\textbf {0}}\). Therefore, velocity alignment is achieved, i.e., \(v_1=v_2=\cdots =v_N\).

If \(\varvec{\varGamma } (\varPsi +D_H\nabla F)\rightarrow {\textbf {0}}\), then \(\delta _i={\textbf {0}}\) implies that \(\dot{\delta }= \overline{{{\varvec{B}}}}D_ze_x+\varvec{\varGamma } (\varPsi +D_H\nabla F)={\textbf {0}}\), and \(\overline{{{\varvec{B}}}}D_ze_x \rightarrow {\textbf {0}}\) as \(t \rightarrow \infty \). Since \(\overline{{{\varvec{B}}}}D_z\) has full column rank by the minimal and infinitesimal rigidity of the graph, one can infer that \(e_x \rightarrow {\textbf {0}}\) as \(t \rightarrow \infty \). This indicates that the error between the desired and actual distances of the multi-agent system tends to zero, and the state and velocity vectors converge towards the largest invariant set \(\mathcal {D}_2\). Furthermore, by applying Proposition 2, \(\sum \nolimits _{i=1}^{N} f_i(x_i,t)\) converges to the minimum. Hence, as time goes to infinity, the dual objectives of converging to the minimum of the objective function and attaining the desired shape are accomplished simultaneously. \(\square \)

### Remark 9

The condition in Theorem 2 holds for the special objective function:

where *a* is a constant and *b*(*t*) is a function of *t*. Obviously, \(\nabla f_i(x_i,t)=2a (x_i+b(t))\), \({\partial \nabla f_i(x_i,t) \over \partial t}=2a\dot{b}\), \({d \over dt} \nabla f_i(x_i,t) =2a(v_i+\dot{b})\), \( {\partial \over \partial t}{d \over dt} \nabla f_i(x_i,t) =2a\ddot{b},\) then \(\psi _i+H_i\nabla f_i(x_i,t)=-2aH_i^{-1}(x_i,t)(v_i+\dot{b}+\ddot{b})\) and \(\varvec{\varGamma }(\varPsi +D_H\nabla F)=-2a{D_H}^{-1}\delta \). According to the analysis in Theorem 2, if \(\delta ={\textbf {0}}\), then \(e_x={\textbf {0}}\) as well. It can be concluded that the double-integrator dynamic system for problem (1) requires stricter conditions than the proposed single-integrator dynamic system: the demand for a complete graph increases the communication cost, and, by Assumption 4, the global objective function must be at least three times differentiable with respect to *t*.
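The gradient identities in Remark 9 can be verified numerically by finite differences. The sketch below treats the scalar case \(f_i(x,t)=a(x+b(t))^2\) with the hypothetical choices \(a=1.5\) and \(b(t)=\sin t\) (our illustration, not values from the paper):

```python
import math

# Finite-difference check (scalar case) of the Remark 9 gradients for
# f_i(x, t) = a * (x + b(t))**2; a = 1.5 and b(t) = sin(t) are hypothetical.
a = 1.5
b = math.sin          # b(t)
db = math.cos         # b'(t)

def f(x, t):
    return a * (x + b(t)) ** 2

def grad_f(x, t):          # claimed gradient: 2a(x + b(t))
    return 2 * a * (x + b(t))

def ddt_grad_f(x, v, t):   # claimed total time derivative: 2a(v + b'(t))
    return 2 * a * (v + db(t))

# Central differences at a sample point, along a trajectory x(t) = x0 + v0*(t - t0).
x0, v0, t0, h = 0.7, -0.3, 1.2, 1e-6
fd_grad = (f(x0 + h, t0) - f(x0 - h, t0)) / (2 * h)
fd_ddt = (grad_f(x0 + v0 * h, t0 + h) - grad_f(x0 - v0 * h, t0 - h)) / (2 * h)
print(abs(fd_grad - grad_f(x0, t0)) < 1e-6)         # True
print(abs(fd_ddt - ddt_grad_f(x0, v0, t0)) < 1e-6)  # True
```

The same check extends componentwise to the vector case, since the objective is separable in the coordinates of \(x_i\).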

### Remark 10

The existing theoretical results, as presented in [38, 42], provided formation control via single-integrator and double-integrator dynamic systems and studied the movement guidance of rigid formations. The approaches employed in these works manipulate the steady-state motion of the formation with gradient distance-based formation control techniques. They enable the simultaneous integration of constant translational and angular velocities while also enabling controlled adjustment of the rigid structure’s scale. However, the motion parameters \(\mu _k\) and \(\tilde{\mu _k}\) chosen in those works must satisfy \(\overline{{\varvec{B}}}[ \bar{{\varvec{S}}}_1D_{\hat{z}} \,\, \bar{{\varvec{S}}}_2D_{\hat{z}}] [\mu ^T, \tilde{\mu }^T]^T={\textbf {0}}\) and \(D_{\hat{z}}\overline{{\varvec{B}}}^T[\bar{\varvec{S}}_1D_{\hat{z}} \,\, \bar{\varvec{S}}_2D_{\hat{z}}] [\mu ^T, \tilde{\mu }^T]^T={\textbf {0}}\), \(\varvec{A}(\mu ,\tilde{\mu })\hat{z}=[ \bar{\varvec{S}}_1D_{\hat{z}} \,\, \bar{\varvec{S}}_2D_{\hat{z}}] [\mu ^T, \tilde{\mu }^T]^T\), where \(\hat{z}\) is defined as \({z \over \parallel z \parallel }\), \(\mu _k,\tilde{\mu }_k\) are two constant mismatch parameters between the desired \(d_{ij}\) and \(d_{ji}\), and \(\mu \), \(\tilde{\mu }\) are the stacked vectors of \(\mu _k\) and \(\tilde{\mu }_k\), respectively. \({\varvec{S}}_1\) is constructed from the incidence matrix \({\varvec{B}}\) by setting its \(-1\) elements to 0, and \({\varvec{S}}_2={\varvec{S}}_1-{\varvec{B}}\). The parameters \(\mu \) and \(\tilde{\mu }\) affect the rotational velocity and the scaling velocity. In practice, when aiming to achieve the desired shape, expected trajectory and specific positions, adjusting these motion parameters, including the translational velocity, the rotational velocity and the velocity responsible for scaling, can be laborious and intricate. However, it is possible to achieve the relevant motion with a time-varying objective function; in our article, we address these movements by introducing the time-varying function.

### Remark 11

Reference [43] proposed a time-varying optimization-based framework for distributed formation control problems and developed distributed time-varying optimization algorithms for Euler-Lagrange systems with equality and inequality constraints. One of the key steps was the design of the expected formation function, and rotating pentagon and decagon formations were considered in [43] and [44]. However, in practical applications, the formations may be more complex. In this paper, the selected formations are more general: we do not need to design an expected formation function, only the expected distances between agents.

## 5 Numerical simulation

### Example 1

First, consider a 6-agent system. Our purpose is to form a regular hexagon while minimizing the global objective function in the 2-dimensional plane, and to discuss the optimal solution set. The connected graph depicted in Fig. 2 is used for algorithm (2), and a complete graph for algorithm (15). Each agent’s objective function is chosen as \(f_i(x_i)=({x_i}_X)^2+({x_i}_Y-1)^2,\) where \({x_i}_X\) and \({x_i}_Y\) denote the Cartesian coordinates of agent *i*, and the incidence matrix is

It is apparent that the graph given for the single-integrator dynamic system is minimally and infinitesimally rigid according to Definition 2. We set \(x_1(0)=[8.1428,2.4352]^T\), \(x_2(0)=[-9.2926,3.4998]^T\), \(x_3(0) =[-1.9660,-2.5108]^T\), \(x_4(0)=[6.1604,-4.7329]^T\), \(x_5(0)=[-7.0332,-16.6166]^T\), \(x_6(0)=[-5.8526, -5.4972]^T\); all of these initial points are chosen randomly. “\(*\)” and “\(\text {o}\)” mark the initial and final points in Figs. 3 and 4, respectively. The evolution of the agents’ positions via the single-integrator algorithm (2) and the double-integrator algorithm (15) is depicted in Figs. 3 and 4, respectively. Figures 5 and 6 exhibit the distance errors via the single-integrator algorithm (2) and the double-integrator algorithm (15), indicating that all edge errors tend to zero. Figure 7 demonstrates the behavior of the global objective functions and their gradient functions: the gradient function converges to zero and the global objective function decreases to the optimal value via both algorithms. These figures demonstrate that both algorithms (2) and (15) effectively reduce the distance and gradient errors, ultimately driving the global objective function towards its optimal value.

Furthermore, despite the differences in the final positions of the agents between algorithm (2) and algorithm (15), the minimum value of the global objective function remains unchanged. This observation suggests that both algorithms are successful in achieving the desired formation while minimizing the global objective function.
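The qualitative behavior of Example 1 can be reproduced with a much smaller gradient-flow sketch. The code below is an illustrative stand-in, not algorithm (2) itself: it simulates three agents forming an equilateral triangle (a minimally rigid graph in 2-D, like the hexagon graph) under a squared-distance-error term plus a pull toward the common minimizer \(c\) of \(f_i(x_i)=\parallel x_i-c\parallel ^2\); the gains, step size and initial positions are hypothetical.

```python
import math

# Simplified 2-D gradient flow: distance-based formation term + objective pull.
# This is an illustrative sketch in the spirit of Example 1, not algorithm (2).
d = 1.0                  # desired inter-agent distance (all edges of K3)
c = (0.0, 1.0)           # common minimizer of every local objective
gamma, dt, steps = 0.05, 0.001, 20000
x = [[2.0, 0.0], [-1.5, 0.5], [0.3, -2.0]]   # arbitrary initial positions

def edge_errors(x):
    errs = []
    for i in range(3):
        for j in range(i + 1, 3):
            dist = math.hypot(x[i][0] - x[j][0], x[i][1] - x[j][1])
            errs.append(abs(dist - d))
    return errs

e0 = max(edge_errors(x))
for _ in range(steps):       # explicit Euler integration of the flow
    new_x = []
    for i in range(3):
        fx = -gamma * (x[i][0] - c[0])       # pull toward the minimizer c
        fy = -gamma * (x[i][1] - c[1])
        for j in range(3):
            if j == i:
                continue
            zx, zy = x[i][0] - x[j][0], x[i][1] - x[j][1]
            w = zx * zx + zy * zy - d * d    # squared-distance error on edge (i,j)
            fx -= w * zx
            fy -= w * zy
        new_x.append([x[i][0] + dt * fx, x[i][1] + dt * fy])
    x = new_x

e1 = max(edge_errors(x))
print(e0, e1)   # the maximum edge error shrinks toward zero
```

As in the paper's figures, the edge errors collapse while the formation drifts toward the objective's minimizer; a small residual edge error of order \(\gamma\) remains here because this sketch lacks the compensation used in algorithm (2).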

### Example 2

Consider the graph depicted in Fig. 1, and \(f_i(x_i,t)=({x_i}_X-\text {sin}(t))^2+({x_i}_Y-\text {cos}(t))^2\) for each agent. Figures 8 and 9 illustrate that the agents not only follow the center \((\text {sin}(t),\text {cos}(t))\) but also maintain the desired shape through algorithm (2) and algorithm (15). It is noteworthy that in algorithm (15), each agent’s velocity converges to the same trajectory, depicted in Fig. 10, which further corroborates the effectiveness of Theorem 2.

### Example 3

Optimization algorithms for the desired shape also have advantages in the path-tracking problem. By designating a particular route as the central route, we can have all the agents track this route without altering the formation via algorithm (2). In this regard, assume that a square formation is required, the expected distances between neighbors are \(d_{12}=d_{23}=d_{34}=d_{41}=2\), \(d_{24}=2\sqrt{2}\), with the time-varying function \(f_i(x_i,t)=({x_i}_X-t)^2+({x_i}_Y+t^2-4t)^2\) for each agent. These functions capture the displacement of the designed route along both the *x*-axis and the *y*-axis. Specifically, the displacement along the *x*-axis is \(x(t)=t\), while the displacement along the *y*-axis is \(y(t)=-t^2+4t\); the route function is therefore \(y=-x^2+4x\). Figure 11 clearly illustrates how the system effectively tracks the center route \((t,-t^2+4t)\) while maintaining the desired shape.
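Eliminating the parameter \(t\) from the Example 3 route can be checked in one line of code; the sampling grid below is our arbitrary choice:

```python
# Check that the parametric route x(t) = t, y(t) = -t**2 + 4*t of Example 3
# lies on the curve y = -x**2 + 4*x.
def route(t):
    return (t, -t**2 + 4*t)

ok = all(abs(y - (-x**2 + 4*x)) < 1e-12
         for x, y in (route(0.1 * k) for k in range(101)))
print(ok)  # True
```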

### Example 4

Consider a system of 4 agents based on single-integrator dynamics that forms a regular tetrahedron while minimizing the global objective function in 3-dimensional space. Based on the infinitesimal and minimal rigidity of the 4-node graph, the incidence matrix can be chosen as

Each agent’s objective function is chosen as \(f_i(x_i)=({x_i}_X-1)^2+({x_i}_Y-1)^2+({x_i}_Z-1)^2,\) where \({x_i}_X\), \({x_i}_Y\) and \({x_i}_Z\) denote the Cartesian coordinates of agent *i*. The distance between adjacent agents is set to \(d_{ij}=1,\) \(\forall (i,j)\in \mathcal {E}\). The initial states are selected randomly. Figure 12 illustrates the four agents arranging themselves into a regular tetrahedron under the single-integrator algorithm. Meanwhile, Figs. 13 and 14 depict the edge errors and the gradient of the global objective function, respectively. This validates that algorithm (2) is effective in 3-dimensional space.
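The rigidity premise of Example 4 can be verified numerically: a framework on \(N\) nodes is infinitesimally rigid in 3-D when its rigidity matrix has rank \(3N-6\). The helper below is our own illustrative check (the tetrahedron coordinates and the pure-Python rank routine are not from the paper):

```python
# Numerical check that the complete 4-node graph of Example 4 is
# infinitesimally rigid in 3-D: the rigidity matrix of a regular
# tetrahedron should have rank 3*N - 6 = 6.
pts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]  # regular tetrahedron
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]  # K4: 6 edges

rows = []
for i, j in edges:
    row = [0.0] * 12                       # 4 agents x 3 coordinates
    z = [pts[i][k] - pts[j][k] for k in range(3)]
    for k in range(3):
        row[3 * i + k] = z[k]              # edge vector in agent i's block
        row[3 * j + k] = -z[k]             # and its negative in agent j's block
    rows.append(row)

def rank(m, tol=1e-9):
    """Rank via Gauss-Jordan elimination with partial pivoting."""
    m = [r[:] for r in m]
    r = 0
    for col in range(len(m[0])):
        piv = max(range(r, len(m)), key=lambda k: abs(m[k][col]), default=None)
        if piv is None or abs(m[piv][col]) < tol:
            continue
        m[r], m[piv] = m[piv], m[r]
        for k in range(len(m)):
            if k != r:
                f = m[k][col] / m[r][col]
                m[k] = [a - f * b for a, b in zip(m[k], m[r])]
        r += 1
        if r == len(m):
            break
    return r

print(rank(rows))  # 6 = 3*4 - 6, so the tetrahedron framework is rigid
```

The same check applied to the 2-D graphs of Examples 1-3 would compare the rank against \(2N-3\).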

### Remark 12

In [44, 45], the focus was primarily on tracking problems in time-varying linear multi-agent systems with multiple leaders. However, the optimal performance of each agent was not considered, and the center of the formation was always set to the origin. In our article, we consider the optimization aspect, and the center of the formation can be placed at an arbitrary position.

## 6 Conclusion

This paper discussed a time-varying distributed optimization problem with a desired shape constraint, and we used two types of multi-agent dynamics, single-integrator and double-integrator, to track the solution of this problem. Each agent accesses its local objective function and the information of neighboring agents to achieve individual optimization while simultaneously achieving the dual objectives. In the case of the single-integrator dynamic system, we first prove the boundedness of each \(z_i\), which means the distances between neighbors are upper bounded; the dual objectives are then achieved under some conditions. Additionally, a corollary regarding variations of the desired distances is presented to enhance the completeness of the results. In the case of the double-integrator dynamic system, our focus shifts towards extending the algorithm to improve its usability in real-world scenarios. We have proven that the formation maintains a consistent velocity for each individual agent while the dual objectives are achieved. The theoretical results are proved mainly by using the Lyapunov theorem and Barbalat’s Lemma. Finally, to validate the system’s performance, some typical simulations are provided. It is worth mentioning that previous research [46] has highlighted that a distorted shape can arise in the steady-state collective motion if there are at least two neighboring agents with inconsistent prescribed distances to maintain. Building upon this understanding, future studies will explore the behavior of the distributed optimization algorithm with desired shape when the desired inter-agent distances are not consistent.

## Data availability

All data generated or analyzed during this study are included in this published article.

## References

1. He, W., Guo, H., Qian, F.: Scaled consensus of second-order multiagent systems via distributed adaptive control. Int. J. Robust Nonlinear Control **31**(9), 4247–4261 (2021)
2. Shi, X., Cao, J., Huang, W.: Distributed parametric consensus optimization with an application to model predictive consensus problem. IEEE Trans. Cybern. **48**(7), 2024–2035 (2018)
3. Sadeghi, M., Kalantar, M.: Fully decentralized multi-agent coordination scheme in smart distribution restoration: multilevel consensus. Appl. Energy **350**, 121606 (2023)
4. Liu, Y., Xia, Z., Gui, W.: Multi-objective distributed optimization via a predefined-time multi-agent approach. IEEE Trans. Autom. Control **68**, 6998–7005 (2023)
5. Xia, Z., Liu, Y., Wang, D., Gui, W.: Modified graph systems for distributed optimization. Sci. China Inf. Sci. **66**(12), 222202 (2023)
6. Hong, H., Wen, G., Yu, X., Yu, W.: Fixed-time cooperative tracking control for double-integrator multiagent systems: a time-based generator approach. IEEE Trans. Syst. **53**(9), 5970–5983 (2023)
7. Hong, H., Wen, G., Yu, X., Yu, W.: Robust distributed average for disturbed second-order multiagent systems. IEEE Trans. Syst. Man Cybern. Syst. **52**(5), 3187–3199 (2021)
8. Jiang, W., Xi, T., Zhang, W., Wang, Y.: Fully distributed formation tracking for high-order nonlinear multi-agent systems with heterogeneous uncertainties. Int. J. Control Autom. Syst. **20**(12), 3859–3871 (2022)
9. Huang, Z., Marelli, D., Xu, Y., Fu, M.: Distributed target tracking using maximum likelihood Kalman filter with non-linear measurements. IEEE Sens. J. **21**(24), 27818–27826 (2021)
10. Xu, L., Guo, Q., He, G., Sun, H.: The impact of synchronous distributed control period on inverter-based cyber-physical microgrids stability with time delay. Appl. Energy **301**, 117440 (2021)
11. Jiang, X., Zeng, X., Sun, J., Chen, J.: Distributed synchronous and asynchronous algorithms for semidefinite programming with diagonal constraints. IEEE Trans. Autom. Control **68**(2), 1007–1022 (2023)
12. Guo, Q., Chen, Z., Shi, Y., Yan, Y., Guo, F.: Synchronous control of multiple electrohydraulic actuators under distributed switching topologies with lumped uncertainty. J. Frankl. Inst. **359**(9), 4288–4306 (2022)
13. Lu, J., Tang, C.Y.: Zero-gradient-sum algorithms for distributed convex optimization: the continuous-time case. IEEE Trans. Autom. Control **57**(9), 2348–2354 (2012)
14. Lin, P., Ren, W., Farrell, J.A.: Distributed continuous-time optimization: nonuniform gradient gains, finite-time convergence, and convex constraint set. IEEE Trans. Autom. Control **62**(5), 2239–2253 (2017)
15. Yuan, K., Ling, Q., Yin, W.: On the convergence of decentralized gradient descent. SIAM J. Optim. **26**(3), 1835–1854 (2016)
16. Zhu, M., Martínez, S.: On distributed convex optimization under inequality and equality constraints. IEEE Trans. Autom. Control **57**(1), 151–164 (2011)
17. Yuan, D., Xu, S., Zhao, H.: Distributed primal-dual subgradient method for multiagent optimization via consensus algorithms. IEEE Trans. Syst. Man Cybern. B Cybern. **41**(6), 1715–1724 (2011)
18. Wang, D., Wang, W.: Necessary and sufficient conditions for containment control of multi-agent systems with time delay. Automatica **103**, 418–423 (2019)
19. Xia, Z., Liu, Y., Wang, J., Wang, J.: Two-timescale recurrent neural networks for distributed minimax optimization. Neural Netw. **41**(6), 1715–1724 (2023)
20. Mo, L., Hu, H., Yu, Y., Ren, G.: Distributed optimization without boundedness of gradients for second-order multi-agent systems over unbalanced network. Inf. Sci. **565**, 177–195 (2021)
21. Mo, L., Yu, Y., Zhao, L., Cao, X.: Distributed continuous-time optimization of second-order multiagent systems with nonconvex input constraints. IEEE Trans. Syst. Man Cybern. Syst. **51**(10), 6404–6413 (2021)
22. Zhu, W., Sun, C., Wang, Q.: Distributed finite-time optimization algorithms for multi-agent systems under directed graphs. Int. J. Robust Nonlinear Control **33**(11), 6286–6307 (2023)
23. Rahili, S., Ren, W.: Distributed continuous-time convex optimization with time-varying cost functions. IEEE Trans. Autom. Control **62**(4), 1590–1605 (2017)
24. Sun, S., Xu, J., Ren, W.: Distributed continuous-time algorithms for time-varying constrained convex optimization. IEEE Trans. Autom. Control **68**(7), 3931–3946 (2023)
25. Huang, B., Zou, Y., Meng, Z.: Distributed continuous-time constrained convex optimization with general time-varying cost functions. Int. J. Robust Nonlinear Control **31**(6), 2222–2236 (2021)
26. Ding, Z.: Distributed time-varying optimization: an output regulation approach. IEEE Trans. Cybern. **54**(4), 2155–2165 (2024)
27. Zhang, J., You, K.: Distributed optimization in multi-agent networks using one-bit of relative state information. In: Uncertainty in Complex Networked Systems: In Honor of Roberto Tempo, pp. 449–479. Springer, Cham (2018)
28. Wang, D., Liu, J., Lian, J., Liu, Y., Wang, Z., Wang, W.: Distributed delayed dual averaging for distributed optimization over time-varying digraphs. Automatica **150**, 110869 (2023)
29. Oh, K.-K., Ahn, H.-S.: Distance-based undirected formations of single-integrator and double-integrator modeled agents in n-dimensional space. Int. J. Robust Nonlinear Control **24**(12), 1809–1820 (2014)
30. Mou, S., Belabbas, M.-A., Morse, A.S., Sun, Z., Anderson, B.D.: Undirected rigid formations are problematic. IEEE Trans. Autom. Control **61**(10), 2821–2836 (2015)
31. Sun, Z., Mou, S., Anderson, B.D., Morse, A.S.: Rigid motions of 3D undirected formations with mismatch between desired distances. IEEE Trans. Autom. Control **62**(8), 4151–4158 (2017)
32. Colombo, L., Diego, D.M.: Noether symmetries and decay laws in formation control of multi-agent systems. IFAC-PapersOnLine **54**(19), 76–81 (2021)
33. Colombo, L.J., Marina, H.G.: Forced variational integrators for the formation control of multiagent systems. IEEE Trans. Control Netw. Syst. **8**(3), 1336–1347 (2021)
34. Liu, Y., Liu, J., He, Z., Li, Z., Zhang, Q., Ding, Z.: A survey of multi-agent systems on distributed formation control. Unmanned Syst. **12**, 1–14 (2023)
35. Sun, Z., Liu, Q., Yu, C., Anderson, B.D.: Generalized controllers for rigid formation stabilization with application to event-based controller design. In: 2015 European Control Conference (ECC), pp. 217–222. IEEE (2015)
36. Marina, H.G., Sun, Z., Cao, M., Anderson, B.D.: Controlling a triangular flexible formation of autonomous agents. IFAC-PapersOnLine **50**(1), 594–600 (2017)
37. Liu, Y., Lin, X., Zhang, C.: Affine formation maneuver control for multi-heterogeneous unmanned surface vessels in narrow channel environments. J. Mar. Sci. Eng. **11**(9), 1811 (2023)
38. De Marina, H.G., Jayawardhana, B., Cao, M.: Taming mismatches in inter-agent distances for the formation-motion control of second-order agents. IEEE Trans. Autom. Control **63**(2), 449–462 (2018)
39. Beckers, T., Hirche, S., Colombo, L.: Online learning-based formation control of multi-agent systems with Gaussian processes. In: 2021 60th IEEE Conference on Decision and Control (CDC), pp. 2197–2202. IEEE (2021)
40. Ding, C., Wei, R., Liu, F.: Prescribed-time distributed optimization for time-varying objective functions: a perspective from time-domain transformation. J. Frankl. Inst. **359**(17), 10267–10280 (2022)
41. Slotine, J.-J.E.: Applied Nonlinear Control, vol. 2, pp. 1123–1131. Prentice Hall, Hoboken (1991)
42. De Marina, H.G., Jayawardhana, B., Cao, M.: Distributed scaling control of rigid formations. In: 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 5140–5145. IEEE (2016)
43. Sun, C., Feng, Z., Hu, G.: Time-varying optimization-based approach for distributed formation of uncertain Euler-Lagrange systems. IEEE Trans. Cybern. **52**(7), 5984–5998 (2022)
44. Dong, X., Yu, B., Shi, Z., Zhong, Y.: Time-varying formation control for unmanned aerial vehicles: theories and applications. IEEE Trans. Control Syst. Technol. **23**(1), 340–348 (2015)
45. Zhang, X., Wu, J., Zhan, X., Han, T., Yan, H.: Observer-based adaptive time-varying formation-containment tracking for multiagent system with bounded unknown input. IEEE Trans. Syst. Man Cybern. Syst.

**53**(3), 1479–1491 (2023)De Marina, H.G., Jayawardhana, B., Cao, M.: Distributed rotational and translational maneuvering of rigid formations and their applications. IEEE Trans. Robot.

**32**(3), 684–697 (2016)

## Acknowledgements

The research is supported by grants from the National Natural Science Foundation of China (Nos. 62172188, 52072130) and the Guangdong Basic and Applied Basic Research Foundation (Nos. 2021A1515011753, 2024A1515010272).

## Funding

National Natural Science Foundation of China (Nos. 62172188, 52072130); Natural Science Foundation of Guangdong Province, China (Nos. 2021A1515011753, 2024A1515010272).

## Author information

### Authors and Affiliations

### Contributions

Xiaotang Zhang: Theoretical analysis, methodology, manuscript writing; Manchun Tan: Funding acquisition, supervision and editing; Siman Lin: Discussion, manuscript preparation.

### Corresponding author

## Ethics declarations

### Conflict of interest

The authors declare that they have no conflict of interest.

## Additional information

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

## About this article

### Cite this article

Zhang, X., Lin, S. & Tan, M. Distributed continuous-time algorithm for time-varying optimization with desired shape constraints.
*Nonlinear Dyn* **112**, 18079–18097 (2024). https://doi.org/10.1007/s11071-024-09952-7
