Here's a page to share important papers, review articles, relevant news, etc.

Wrap-up Discussion

Topics/Questions for discussion at 9:30 am on Thursday, June 28: please add your ideas below. Our final discussion on Thursday will look at where we've been and where we should go. Please contribute your thoughts so we make sure to cover the complete gamut.

Suggestion 1 (Petros Koumoutsakos)

Identify 2-3 problems with multiple, strongly coupled, interacting time and/or space and/or structure scales.
Can these be used for benchmarking and common development of multiscale algorithms?

Consolidation/checking of what we've learned about going from atomistic to coarse-grained models

Are IBI, RMC, force matching, and relative entropy equally good and bad?

What if I need to fit multiple CG sites?

Can I use relative entropy to improve IBI?

Do I gain anything extra from RMC, or is it just slower?

Can I have an automatic fix for the times when force matching goes wrong?

What if I want to guarantee I get the same chemical potential as my atomistic model?

How do I decide which type of thermostat to use for nonequilibrium problems? How do I know I did it correctly?

How can we tell up front for which quantities a coarse-grained model will give the right answer? Is there a systematic way to tell, or only on a case-by-case basis?
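Several of these questions reference Iterative Boltzmann Inversion (IBI). For concreteness, the core IBI update is the standard one, V_{n+1}(r) = V_n(r) + kT ln[g_n(r)/g_target(r)]. A minimal sketch on synthetic data (the function names, the damping factor alpha, and the toy g(r) profiles are illustrative, not taken from any particular package):

```python
import numpy as np

def ibi_update(V, g_current, g_target, kT=1.0, alpha=1.0):
    """One Iterative Boltzmann Inversion step:
    V_{n+1}(r) = V_n(r) + alpha * kT * ln(g_n(r) / g_target(r)).
    alpha < 1 damps the update for stability."""
    with np.errstate(divide="ignore", invalid="ignore"):
        correction = np.where((g_current > 0) & (g_target > 0),
                              kT * np.log(g_current / g_target), 0.0)
    return V + alpha * correction

# Synthetic illustration: start from the potential of mean force of g_target
r = np.linspace(0.8, 3.0, 50)
g_target = 1.0 + 0.3 * np.exp(-(r - 1.2)**2 / 0.05)
V0 = -np.log(g_target)                              # PMF as initial guess (kT = 1)
g_sim = 1.0 + 0.2 * np.exp(-(r - 1.2)**2 / 0.05)    # stand-in for a measured g(r)
V1 = ibi_update(V0, g_sim, g_target, alpha=0.5)     # under-structured -> deepen well
```

Where the simulated g(r) undershoots the target, the logarithm is negative and the potential is lowered, which is the feedback that drives the iteration toward the target structure.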

Suggestion 2 (Joerg Rottler)

I would like to raise again the question of coarse-graining in time vs coarse-graining in space.
What options do we have to go beyond the nanosecond regime of MD, and for which classes of
problems do these methods work? If we do nonequilibrium MD, how do we extrapolate to the
experimental time scales and how do we make sure that we have not missed any "rare events"
that would change the physics?

Suggestion 3 (Markus Deserno)

I have picked a set of issues I personally find interesting and have phrased the associated challenges as questions. I have restricted myself entirely to equilibrium, since I feel there are plenty of fundamental questions we do not yet understand.

Representability: How can we know ahead of time which observables are correctly represented in a coarse grained model? And if we know they will not be, can we at least know ahead of time how big the systematic error might be: ten percent? A factor of two? Four orders of magnitude? Can we put a (systematic) error bar on multiscaling? If not, what does it mean to "predict" something?

Target: What are the right things we should calculate using multiscaling? Is the aim really to get accurate numbers? Does the answer to that question depend on whether we're scientists or engineers? What should we specifically strive for if we're physicists?

Minimal models: Coarse graining might teach us about the minimal physical input for a given phenomenon of interest. If we have managed to build an "irreducible model", most likely its enthalpic and entropic contributions balance in a way that resembles the real situation. Could it be that systematic coarse graining often works because it assures that this balance is achieved? If so, is there a genuine and quantifiable benefit of additionally getting every single detail of a coarse grained interaction right?

Preconditions: How strongly does the success of scale-bridging depend on our pre-existing understanding of the physics on larger scales, or even on its universality? For instance, polymer science is awesome for physicists because there's so much universality in it. Hence, systematic coarse graining had better aim at the non-universal prefactors. If these can be extracted from fairly local physics, coarse graining will work very well, but can we expect such a nice scenario for other systems that don't exhibit a comparable degree of universality (e.g. proteins or composites)? This conceivably boils down to the question: for which types of systems is multiscaling a promising strategy?

Mixtures: In an N-component system we have O(N²) cross-interactions. This becomes a severe cross-parametrization problem if we wish to construct coarse grained potentials from the bottom up. Do we need to follow a different strategy here? (Compare how MARTINI, which is parametrized on mixing properties, aims to address this problem!)
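The O(N²) scaling is simple counting: with N bead types there are N(N+1)/2 distinct pair interactions once like pairs are included. A trivial sketch:

```python
def n_pair_interactions(n_types: int) -> int:
    """Number of distinct pair interactions among n_types bead types,
    counting both like pairs (A-A) and cross pairs (A-B)."""
    return n_types * (n_types + 1) // 2

# A 4-component mixture already needs 10 pair potentials,
# only 4 of which are like-pair interactions.
```

Doubling the number of components thus roughly quadruples the parametrization effort, which is the point of the question above.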

Successes: What are the big breakthroughs we have already achieved, especially in light of the expectations harbored outside of our community (which we presumably raised at some point in the past)? Has coarse graining delivered or fallen short of its promises? Does this have consequences for how we as a community advertise its merits?

Sophistication: Will our field be more and more strongly dominated by a handful of extremely sophisticated software packages that fewer and fewer people truly understand (compare e.g. to the current state of quantum chemistry)? Is this OK? Is this inevitable? If so, how would we minimize the obvious dangers that go along with such a trend?

Tuning: The development of CG potentials can be automated if (and only if!) we have a straightforward target to optimize, e.g. match g(r), match forces, minimize the relative entropy. But often we also have a lot of extra physical intuition about what our CG model should achieve or avoid, and this is not easily formalized or quantified. How do we account for this? How do we address growing expectations that CG models ought to be constructed using some sort of automatization?
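One way extra targets are sometimes folded into an automated fit, sketched here purely as a hypothetical illustration, is a weighted multi-objective loss. The observables, weights, and `predict` function below are made up for the example; choosing the weights is exactly the unformalized intuition the question is about:

```python
import numpy as np

def cg_loss(params, targets, weights, predict):
    """Hypothetical multi-target objective for CG potential tuning:
    a weighted sum of squared mismatches over whatever observables
    (g(r), pressure, compressibility, ...) we can quantify.
    `predict` maps parameters to a dict of predicted observables."""
    pred = predict(params)
    return sum(w * np.sum((pred[k] - targets[k])**2)
               for k, w in weights.items())

# Toy illustration: one scalar "pressure" target and a two-point "g(r)" target
targets = {"pressure": 1.0, "g": np.array([1.0, 2.0])}
weights = {"pressure": 10.0, "g": 1.0}
predict = lambda p: {"pressure": p[0], "g": np.array([p[1], p[1] + 1.0])}
loss = cg_loss(np.array([1.0, 1.0]), targets, weights, predict)
```

Anything that cannot be expressed as such a quantitative target simply never enters the optimization, which is the gap the discussion point identifies.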

Abstract: Peter Daivis, Tuesday, June 12

In this talk, I will discuss two examples of the fruitful relationship between molecular
dynamics simulations and continuum physics. The first is fluid slip, which can be studied
directly by non-equilibrium molecular dynamics simulations, by equilibrium molecular
dynamics, or from the continuum point of view, by introducing the slip friction coefficient
or slip length. The second is an attempt to provide a thermodynamic analysis of homogeneous
thermostats for nonequilibrium molecular dynamics simulations of far-from-equilibrium
shearing liquids.
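For reference, the continuum-level quantities mentioned in the abstract are linked by the standard Navier slip boundary condition; for a wall at z = 0 and shear viscosity η, the slip velocity u_s, slip length b, and slip friction coefficient ξ satisfy (textbook definitions, not results from the talk itself):

```latex
u_s = b\,\left.\frac{\partial u_x}{\partial z}\right|_{z=0},
\qquad
\sigma_{xz} = \xi\, u_s,
\qquad
\xi = \frac{\eta}{b}.
```

Either b or ξ can therefore be extracted from molecular dynamics and handed to a continuum description.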

Anyone who is interested in the derivation of the expressions for kinetic and configurational temperatures using the methods I mentioned in my second talk is welcome to look at my notes on the subject. They are similar in content to the papers by Powles (Mol. Phys. 103, 1361 (2005)) and Jepps (Phys. Rev. E 62, 4757 (2000)), a little more detailed in some respects, and less rigorous. A less detailed version will appear in a paper that is in preparation: notes on temperature
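For readers who have not seen them, the two temperature expressions discussed by Powles and Jepps take the standard forms (N_f is the number of momentum degrees of freedom and U the potential energy):

```latex
k_B T_{\mathrm{kin}} = \frac{1}{N_f}\left\langle \sum_i \frac{\mathbf{p}_i^2}{m_i} \right\rangle,
\qquad
k_B T_{\mathrm{conf}} = \frac{\bigl\langle |\nabla U|^2 \bigr\rangle}{\bigl\langle \nabla^2 U \bigr\rangle}.
```

The configurational form is useful precisely because it makes no reference to momenta, which matters for thermostatted nonequilibrium simulations where the streaming velocity is ambiguous.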

Subtopic discussion list:

1. How structure determines coarse-grained potentials and vice versa

(Will Noid)
Certain "bottom-up" approaches to deriving coarse-grained potentials are often classified as force-based or structure-based. In previous work, we demonstrated that force-based coarse-graining can be accomplished without force information via a generalization of the Yvon-Born-Green integral equation theory. In the upcoming talk, I will discuss our more recent efforts to understand the relationship between force- and structure-motivated coarse-graining approaches by (1) formulating force-based coarse-graining within an information theoretic framework in analogy to that previously developed by Shell for iterative structure-based approaches; and (2) demonstrating that mean forces provide a unifying framework for relating force- and structure-motivated coarse-graining approaches and for understanding how many-body correlations influence coarse-grained potentials.
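As a concrete anchor for "force-based" coarse-graining, the multiscale coarse-graining/force-matching objective is a least-squares fit of the CG force field to mapped atomistic forces; with a linear basis the fit reduces to ordinary least squares. A minimal sketch on synthetic data (the inverse-power basis and the fake reference forces are illustrative, not Noid's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "mapped atomistic" pair forces sampled at CG pair distances
r = rng.uniform(0.9, 2.5, size=200)
f_ref = 24.0 * (2.0 / r**13 - 1.0 / r**7)      # stand-in for mapped forces
f_ref += rng.normal(0.0, 0.05, size=r.shape)   # sampling noise

# Linear basis for the CG pair force (inverse-power functions, illustrative)
basis = np.column_stack([r**(-p) for p in (13, 7, 4)])

# Force matching: least-squares projection of reference forces onto the basis
coeffs, *_ = np.linalg.lstsq(basis, f_ref, rcond=None)
f_cg = basis @ coeffs
rms_residual = np.sqrt(np.mean((f_cg - f_ref) ** 2))
```

In the real method the "reference forces" are the atomistic forces mapped onto CG sites, and the residual that cannot be captured by the basis is exactly where the many-body correlations discussed in the talk enter.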

2. Reaction coordinates, collective variables and coarse graining degrees of freedom

Discussion session (suggested by Peter Bolhuis)

When describing a dynamical system on a coarse level one needs to know which degrees of freedom to keep, and which can be integrated out. This question is related to the problem of finding a reaction coordinate that describes a dynamical (rare event) process.
In coarse graining an atomistic system one integrates out the fast and irrelevant degrees of freedom, which represent only noise, while keeping the slow (collective) degrees of freedom. Provided that there are no memory effects, the system can then in principle be described by a Langevin equation in the coarse-grained collective variable, with an effective diffusion constant.
This procedure is roughly the same as that used in studying rare-event dynamics between stable states. There one is interested in the collective variable that describes the transition between metastable states. Again, the goal is to come up with a low-dimensional collective variable, the reaction coordinate, to describe the process. In principle, this transition could then again be described by a Langevin equation in the coarse-grained reaction coordinate.
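The Langevin description both paragraphs invoke can be made concrete with overdamped dynamics on a free-energy surface F(q). A minimal Euler-Maruyama sketch (the double-well F and all parameters are illustrative):

```python
import numpy as np

def overdamped_langevin(F_prime, q0, D=1.0, kT=1.0, dt=1e-3, n_steps=10000, seed=0):
    """Euler-Maruyama integration of the overdamped Langevin equation
        dq = -(D / kT) F'(q) dt + sqrt(2 D dt) * xi
    for a coarse-grained collective variable q with free energy F and
    effective diffusion constant D."""
    rng = np.random.default_rng(seed)
    q = np.empty(n_steps + 1)
    q[0] = q0
    for i in range(n_steps):
        q[i + 1] = (q[i] - (D / kT) * F_prime(q[i]) * dt
                    + np.sqrt(2 * D * dt) * rng.normal())
    return q

# Double-well free energy F(q) = (q^2 - 1)^2, so F'(q) = 4 q (q^2 - 1);
# trajectories fluctuate in, and occasionally hop between, the metastable
# states at q = -1 and q = +1 -- exactly the rare-event picture above
traj = overdamped_langevin(lambda q: 4 * q * (q**2 - 1), q0=-1.0)
```

Both viewpoints in the discussion reduce to choosing a good q: if the collective variable is poor, F(q) and D stop being sufficient and memory effects reappear.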

In this discussion we would like to find out the relations between these two approaches, and what we can learn from each.

This informal discussion will take place on Monday afternoon 14 May at 1:30 pm in the Auditorium.

3. Representability problems

(Ard Louis)
Here is a paper that I will be discussing on Tuesday:

No free lunch for effective potentials: general comment for Faraday FD144: I briefly review the problem of representability -- a single coarse-grained effective pair potential cannot simultaneously represent all the properties of an underlying more complex system -- as well as a few other subtleties that can arise in interpreting coarse-grained potentials.

A.A. Louis

If there is enough time, I may also discuss some ideas on coarse-graining dynamics:

I briefly review some concepts related to coarse-graining methods for the dynamics of soft matter systems and argue that such schemes will almost always need to telescope down the physical hierarchy of time-scales to a more compressed, but more computationally manageable, separation.

4. Hybrid all-atom, coarse-grained models

(suggested by Siewert-Jan Marrink)

Simulations in which the most critical part of the system is treated at an all-atom (AA) level and the surroundings are treated at a lower, coarse-grained (CG) resolution form a nice multiscale challenge. As explained by Kurt, this can be done either in a static approach, where the resolution of the molecules does not change (similar to QM/MM methods), or in a dynamic way involving resolution transformation on the fly.

I want to discuss the static approach, which is ideally suited for applications such as protein folding (in a heterogeneous environment like a membrane), protein-protein recognition, protein-lipid interactions, ligand binding, and many more.

The main challenge of such a method lies in the treatment and parameterization of the cross-interactions between the AA and CG degrees of freedom. Questions open for discussion are: i) Does one need to fully reparameterize the cross-interactions, or can one base them on existing AA and CG force fields? ii) How does one treat the electrostatic coupling (most AA force fields have partial charges, most CG force fields do not)? iii) How does one avoid artefacts due to the incompatibility of bead sizes? iv) ...... v) ... mcxxv) ... mcxxvi). (no more please).
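Question i) can be illustrated with the simplest possible baseline: Lorentz-Berthelot-style combination of AA and CG Lennard-Jones parameters. This is a hypothetical starting point, not an endorsed scheme; discussions like this one exist precisely because such naive mixing often fails across resolutions:

```python
import math

def lorentz_berthelot(sigma_aa, eps_aa, sigma_cg, eps_cg):
    """Naive cross-interaction parameters between an all-atom (AA) site
    and a coarse-grained (CG) bead via Lorentz-Berthelot mixing:
    arithmetic mean for sigma, geometric mean for epsilon."""
    sigma_cross = 0.5 * (sigma_aa + sigma_cg)
    eps_cross = math.sqrt(eps_aa * eps_cg)
    return sigma_cross, eps_cross

# Illustrative numbers only: a ~0.35 nm atomistic site against a ~0.47 nm CG bead
sigma_x, eps_x = lorentz_berthelot(0.35, 0.65, 0.47, 3.5)
```

Because a CG epsilon folds in entropic contributions that an AA epsilon does not, the geometric mean has no clear physical justification here, which is why full reparameterization of the cross terms is on the table.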

6. Coarse-Grained Polymer Simulation Software in LAMMPS

Andrew Jewett recommended a software package: a text-manipulation tool for constructing LAMMPS simulation input files. (It can also be used to generate input for other molecular dynamics simulation software.)


Related reading for subtopic 1 (Will Noid):

Background on force-based coarse-graining:

The multiscale coarse-graining method. I. A rigorous bridge between atomistic and coarse-grained models

Background on generalized Yvon-Born-Green theory:

Generalized Yvon-Born-Green Theory for Molecular Systems

A Generalized Yvon-Born-Green Theory for Determining Coarse-Grained Interaction Potentials

Recovering physical potentials from a model protein databank

Information theoretic framework:

Coarse-graining entropy, forces, and structures.

Mean forces and many-body correlations:

The Role of Many-Body Correlations in Determining Potentials for Coarse-Grained Models of Equilibrium Structure



Second comment by A.A. Louis for Faraday FD144: Coarse-graining dynamics by telescoping down time-scales


5. Nanoparticles in Polymer Melts

References suggested by M. Rubinstein.

Diffusion of nanoparticles in melts: F. Brochard-Wyart and P. G. de Gennes, Eur. Phys. J. E 1, 93-97 (2000).

More detailed analysis of mobilities on shorter time scales and extension to polymer solutions: "Mobility of Non-sticky Nanoparticles in Polymer Liquids" by L.-H. Cai, S. Panyukov, and M. Rubinstein, Macromolecules 44, 7853-7863 (2011).

Surface effects on polymer mobility (possibly related to the Tg shift), an experiment with a sandwich of thin polymer films on a substrate by dynamic SIMS: "Long-Range Effects on Polymer Diffusion Induced by a Bounding Surface" by X. Zheng, M. H. Rafailovich, J. Sokolov, Y. Strzhemechny, S. A. Schwarz, B. B. Sauer, and M. Rubinstein, Phys. Rev. Lett. 79, 241 (1997).

One recent theoretical model of the Tg shift: "A Microscopic Model for the Reinforcement and the Nonlinear Behavior of Filled Elastomers and Thermoplastic Elastomers (Payne and Mullins Effects)" by Samy Merabia, Paul Sotta, and Didier R. Long, Macromolecules 41(21), 8252-8266 (2008).