. "Torun" . "978-3-642-31463-6" . "Berlin" . "Lecture Notes in Computer Science. Volume 7203" . "This paper deals with an analysis of various parallelization strategies for the TFETI algorithm. The data distributions and the most relevant actions are discussed, especially those concerning coarse problem. Being a part of the coarse space projector, it couples all the subdomains computations and accelerates convergence. Its direct solution is more robust but complicates the massively parallel implementation. Presented numerical results confirm high parallel scalability of the coarse problem solution if the dual constraint matrix is distributed and then orthonormalized in parallel. Benchmarks were implemented using PETSc parallelization library and run on HECToR service at EPCC in Edinburgh."@en . . "TFETI Coarse Space Projectors Parallelization Strategies"@en . "0302-9743" . . "2"^^ . "TFETI Coarse Space Projectors Parallelization Strategies" . . . . "Springer-Verlag" . "TFETI Coarse Space Projectors Parallelization Strategies"@en . "10.1007/978-3-642-31464-3_16" . . "27740" . "RIV/61989100:27740/12:86084421" . "174169" . . . "P(ED1.1.00/02.0070), S" . . "Hor\u00E1k, David" . . "TFETI Coarse Space Projectors Parallelization Strategies" . "[45B98BD3E68D]" . . "RIV/61989100:27740/12:86084421!RIV13-MSM-27740___" . "2011-09-11+02:00"^^ . . . . "000313192800016" . . . . "Hapla, V\u00E1clav" . . "coarse problem; natural coarse space; PETSc; TFETI; Total FETI; FETI; domain decomposition"@en . "11"^^ . . . "This paper deals with an analysis of various parallelization strategies for the TFETI algorithm. The data distributions and the most relevant actions are discussed, especially those concerning coarse problem. Being a part of the coarse space projector, it couples all the subdomains computations and accelerates convergence. Its direct solution is more robust but complicates the massively parallel implementation. Presented numerical results confirm high parallel scalability of the coarse problem solution if the dual constraint matrix is distributed and then orthonormalized in parallel. Benchmarks were implemented using PETSc parallelization library and run on HECToR service at EPCC in Edinburgh." . . "2"^^ . .