svn commit: r1750056 - in /tomcat/trunk/webapps/docs: changelog.xml config/realm.xml

2016-06-24 Thread markt
Author: markt
Date: Fri Jun 24 09:29:11 2016
New Revision: 1750056

URL: http://svn.apache.org/viewvc?rev=1750056&view=rev
Log:
Follow-up to BZ 59399. Document NullRealm and transportGuaranteeRedirectStatus 
for all Realms.

Modified:
tomcat/trunk/webapps/docs/changelog.xml
tomcat/trunk/webapps/docs/config/realm.xml

Modified: tomcat/trunk/webapps/docs/changelog.xml
URL: 
http://svn.apache.org/viewvc/tomcat/trunk/webapps/docs/changelog.xml?rev=1750056&r1=1750055&r2=1750056&view=diff
==============================================================================
--- tomcat/trunk/webapps/docs/changelog.xml (original)
+++ tomcat/trunk/webapps/docs/changelog.xml Fri Jun 24 09:29:11 2016
@@ -134,6 +134,12 @@
 Manager and HostManager applications now have a
 RemoteAddrValve configured by default. (markt)
   
+      <fix>
+        Follow-up to the fix for <bug>59399</bug>. Ensure that the new
+        attribute <code>transportGuaranteeRedirectStatus</code> is documented
+        for all Realms. Also document the <code>NullRealm</code> and when it
+        is automatically created for an Engine. (markt)
+      </fix>
 
   
   

Modified: tomcat/trunk/webapps/docs/config/realm.xml
URL: 
http://svn.apache.org/viewvc/tomcat/trunk/webapps/docs/config/realm.xml?rev=1750056&r1=1750055&r2=1750056&view=diff
==============================================================================
--- tomcat/trunk/webapps/docs/config/realm.xml (original)
+++ tomcat/trunk/webapps/docs/config/realm.xml Fri Jun 24 09:29:11 2016
@@ -49,8 +49,9 @@
   this one Realm may itself contain multiple nested Realms). In addition, the
   Realm associated with an Engine or a Host is automatically inherited by
  lower-level containers unless the lower level container explicitly defines its
-  own Realm.
-  
+  own Realm. If no Realm is configured for the Engine, an instance of the
+  Null Realm
+  will be configured for the Engine automatically.
 
   For more in-depth information about container managed security in web
   applications, as well as more information on configuring and using the
@@ -161,7 +162,7 @@
   <attribute name="transportGuaranteeRedirectStatus" required="false">
     <p>The HTTP status code to use when the container needs to issue an HTTP
        redirect to meet the requirements of a configured transport
-       guarantee. The prpvoded status code is not validated. If not
+       guarantee. The provided status code is not validated. If not
        specified, the default value of 302 is used.</p>
   </attribute>
 
@@ -272,6 +273,13 @@
        a rare case when it can be omitted.</p>
   </attribute>
 
+  <attribute name="transportGuaranteeRedirectStatus" required="false">
+    <p>The HTTP status code to use when the container needs to issue an HTTP
+       redirect to meet the requirements of a configured transport
+       guarantee. The provided status code is not validated. If not
+       specified, the default value of 302 is used.</p>
+  </attribute>
+
   <attribute name="stripRealmForGss" required="false">
     <p>When processing users authenticated via the GSS-API, this attribute
        controls if any "@..." is removed from the end of the user
@@ -592,6 +600,13 @@
        limit.</p>
   </attribute>
 
+  <attribute name="transportGuaranteeRedirectStatus" required="false">
+    <p>The HTTP status code to use when the container needs to issue an HTTP
+       redirect to meet the requirements of a configured transport
+       guarantee. The provided status code is not validated. If not
+       specified, the default value of 302 is used.</p>
+  </attribute>
+
   <attribute name="useDelegatedCredential" required="false">
     <p>When the JNDIRealm is used with the SPNEGO authenticator, delegated
        credentials for the user may be available. If such credentials are
@@ -736,6 +751,13 @@
        that this realm will use for user, password and role information.</p>
   </attribute>
 
+  <attribute name="transportGuaranteeRedirectStatus" required="false">
+    <p>The HTTP status code to use when the container needs to issue an HTTP
+       redirect to meet the requirements of a configured transport
+       guarantee. The provided status code is not validated. If not
+       specified, the default value of 302 is used.</p>
+  </attribute>
+
   <attribute name="X509UsernameRetrieverClassName" required="false">
     <p>When using X509 client certificates, this specifies the class name
        that will be used to retrieve the user name from the certificate.
@@ -797,6 +819,13 @@
        name. If not specified, the default is true.</p>
   </attribute>
 
+  <attribute name="transportGuaranteeRedirectStatus" required="false">
+    <p>The HTTP status code to use when the container needs to issue an HTTP
+       redirect to meet the requirements of a configured transport
+       guarantee. The provided status code is not validated. If not
+       specified, the default value of 302 is used.</p>
+  </attribute>
+
   <attribute name="X509UsernameRetrieverClassName" required="false">
     <p>When using X509 client certificates, this specifies the class name
        that will be used to retrieve the user name from the certificate.
@@ -906,6 +935,13 @@
        name. If not specified, the default is true.</p>
   </attribute>
 
+  <attribute name="transportGuaranteeRedirectStatus" required="false">
+    <p>The HTTP status code to use when the container needs to issue an HTTP
+       redirect to meet the requirements of a configured transport
+       guarantee. The provided status code is not validated. If not
+       specified, the default value of 302 is used.</p>
+  </attribute>
+
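
As a usage illustration (not part of the commit itself), the new attribute is
set directly on a Realm element in server.xml; the realm class and the 307
status used here are arbitrary examples:

<!-- Example only: redirect with 307 instead of the default 302 when
     redirecting to meet a transport guarantee. -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
       resourceName="UserDatabase"
       transportGuaranteeRedirectStatus="307"/>

If no Realm element is configured for the Engine at all, the Null Realm
described above is created for the Engine automatically.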
 

svn commit: r1750057 - in /tomcat/tc8.5.x/trunk: ./ webapps/docs/changelog.xml webapps/docs/config/realm.xml

2016-06-24 Thread markt
Author: markt
Date: Fri Jun 24 09:34:28 2016
New Revision: 1750057

URL: http://svn.apache.org/viewvc?rev=1750057&view=rev
Log:
Follow-up to BZ 59399. Document NullRealm and transportGuaranteeRedirectStatus 
for all Realms.

Modified:
tomcat/tc8.5.x/trunk/   (props changed)
tomcat/tc8.5.x/trunk/webapps/docs/changelog.xml
tomcat/tc8.5.x/trunk/webapps/docs/config/realm.xml

Propchange: tomcat/tc8.5.x/trunk/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Jun 24 09:34:28 2016
@@ -1 +1 @@
-/tomcat/trunk:1734785,1734799,1734845,1734928,1735041,1735044,1735480,1735577,1735597,1735599-1735600,1735615,1736145,1736162,1736209,1736280,1736297,1736299,1736489,1736646,1736703,1736836,1736849,1737104-1737105,1737112,1737117,1737119-1737120,1737155,1737157,1737192,1737280,1737339,1737632,1737664,1737715,1737748,1737785,1737834,1737860,1737903,1737959,1738005,1738007,1738014-1738015,1738018,1738022,1738039,1738043,1738059-1738060,1738147,1738149,1738174-1738175,1738261,1738589,1738623-1738625,1738643,1738816,1738850,1738855,1738946-1738948,1738953-1738954,1738979,1738982,1739079-1739081,1739087,1739113,1739153,1739172,1739176,1739191,1739474,1739726,1739762,1739775,1739814,1739817-1739818,1739975,1740131,1740324,1740465,1740495,1740508-1740509,1740520,1740535,1740707,1740803,1740810,1740969,1740980,1740991,1740997,1741015,1741033,1741036,1741058,1741060,1741080,1741147,1741159,1741164,1741173,1741181,1741190,1741197,1741202,1741208,1741213,1741221,1741225,1741232,1741409,1741501
 
,1741677,1741892,1741896,1741984,1742023,1742042,1742071,1742090,1742093,1742101,1742105,1742111,1742139,1742146,1742148,1742166,1742181,1742184,1742187,1742246,1742248-1742251,1742263-1742264,1742268,1742276,1742369,1742387,1742448,1742509-1742512,1742917,1742919,1742933,1742975-1742976,1742984,1742986,1743019,1743115,1743117,1743124-1743125,1743134,1743425,1743554,1743679,1743696-1743698,1743700-1743701,1744058,1744064-1744065,1744125,1744194,1744229,1744270,1744323,1744432,1744684,1744697,1744705,1744713,1744760,1744786,1745142-1745143,1745145,1745177,1745179-1745180,1745227,1745248,1745254,1745337,1745467,1745576,1745735,1745744,1746304,1746306-1746307,1746319,1746327,1746338,1746340-1746341,1746344,1746427,1746441,1746473,1746490,1746492,1746495-1746496,1746499-1746501,1746503-1746507,1746509,1746549,1746551,1746554,1746556,1746558,1746584,1746620,1746649,1746724,1746939,1746989,1747014,1747028,1747035,1747210,1747225,1747234,1747253,1747404,1747506,1747536,1747924,1747980,1747
 
993,1748001,1748253,1748452,1748547,1748629,1748676,1748715,1749287,1749296,1749328,1749373,1749465,1749506,1749508,1749665-1749666,1749763,1749865-1749866,1749898,1749978,1749980,1750011,1750015
+/tomcat/trunk:1734785,1734799,1734845,1734928,1735041,1735044,1735480,1735577,1735597,1735599-1735600,1735615,1736145,1736162,1736209,1736280,1736297,1736299,1736489,1736646,1736703,1736836,1736849,1737104-1737105,1737112,1737117,1737119-1737120,1737155,1737157,1737192,1737280,1737339,1737632,1737664,1737715,1737748,1737785,1737834,1737860,1737903,1737959,1738005,1738007,1738014-1738015,1738018,1738022,1738039,1738043,1738059-1738060,1738147,1738149,1738174-1738175,1738261,1738589,1738623-1738625,1738643,1738816,1738850,1738855,1738946-1738948,1738953-1738954,1738979,1738982,1739079-1739081,1739087,1739113,1739153,1739172,1739176,1739191,1739474,1739726,1739762,1739775,1739814,1739817-1739818,1739975,1740131,1740324,1740465,1740495,1740508-1740509,1740520,1740535,1740707,1740803,1740810,1740969,1740980,1740991,1740997,1741015,1741033,1741036,1741058,1741060,1741080,1741147,1741159,1741164,1741173,1741181,1741190,1741197,1741202,1741208,1741213,1741221,1741225,1741232,1741409,1741501
 
,1741677,1741892,1741896,1741984,1742023,1742042,1742071,1742090,1742093,1742101,1742105,1742111,1742139,1742146,1742148,1742166,1742181,1742184,1742187,1742246,1742248-1742251,1742263-1742264,1742268,1742276,1742369,1742387,1742448,1742509-1742512,1742917,1742919,1742933,1742975-1742976,1742984,1742986,1743019,1743115,1743117,1743124-1743125,1743134,1743425,1743554,1743679,1743696-1743698,1743700-1743701,1744058,1744064-1744065,1744125,1744194,1744229,1744270,1744323,1744432,1744684,1744697,1744705,1744713,1744760,1744786,1745142-1745143,1745145,1745177,1745179-1745180,1745227,1745248,1745254,1745337,1745467,1745576,1745735,1745744,1746304,1746306-1746307,1746319,1746327,1746338,1746340-1746341,1746344,1746427,1746441,1746473,1746490,1746492,1746495-1746496,1746499-1746501,1746503-1746507,1746509,1746549,1746551,1746554,1746556,1746558,1746584,1746620,1746649,1746724,1746939,1746989,1747014,1747028,1747035,1747210,1747225,1747234,1747253,1747404,1747506,1747536,1747924,1747980,1747
 
993,1748001,1748253,1748452,1748547,1748629,1748676,1748715,1749287,1749296,1749328,1749373,1749465,1749506,1749508,1749665-1749666,1749763,1749865-1749866,1749898,1749978,1749980,1750011,1750015,1750056

Modified: tomcat/tc8.5.x/trunk/webapps/docs/changelog.xml

svn commit: r1750058 - in /tomcat/tc8.0.x/trunk: ./ webapps/docs/changelog.xml webapps/docs/config/realm.xml

2016-06-24 Thread markt
Author: markt
Date: Fri Jun 24 09:34:50 2016
New Revision: 1750058

URL: http://svn.apache.org/viewvc?rev=1750058&view=rev
Log:
Follow-up to BZ 59399. Document NullRealm and transportGuaranteeRedirectStatus 
for all Realms.

Modified:
tomcat/tc8.0.x/trunk/   (props changed)
tomcat/tc8.0.x/trunk/webapps/docs/changelog.xml
tomcat/tc8.0.x/trunk/webapps/docs/config/realm.xml

Propchange: tomcat/tc8.0.x/trunk/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Jun 24 09:34:50 2016
@@ -1,2 +1,2 @@
 
/tomcat/tc8.5.x/trunk:1735042,1737966,1743139-1743140,1744151,1747537,1747925,1748002
-/tomcat/trunk:1636524,1637156,1637176,1637188,1637331,1637684,1637695,1637890,1637892,1638720-1638725,1639653,1640010,1640083-1640084,1640088,1640275,1640322,1640347,1640361,1640365,1640403,1640410,1640652,1640655-1640658,1640688,1640700-1640883,1640903,1640976,1640978,1641000,1641026,1641038-1641039,1641051-1641052,1641058,1641064,1641300,1641369,1641374,1641380,1641486,1641634,1641656-1641692,1641704,1641707-1641718,1641720-1641722,1641735,1641981,1642233,1642280,1642554,1642564,1642595,1642606,1642668,1642679,1642697,1642699,1642766,1643002,1643045,1643054-1643055,1643066,1643121,1643128,1643206,1643209-1643210,1643216,1643249,1643270,1643283,1643309-1643310,1643323,1643365-1643366,1643370-1643371,1643465,1643474,1643536,1643570,1643634,1643649,1643651,1643654,1643675,1643731,1643733-1643734,1643761,1643766,1643814,1643937,1643963,1644017,1644169,1644201-1644203,1644321,1644323,1644516,1644523,1644529,1644535,1644730,1644768,1644784-1644785,1644790,1644793,1644815,1644884,1644886
 
,1644890,1644892,1644910,1644924,1644929-1644930,1644935,1644989,1645011,1645247,1645355,1645357-1645358,1645455,1645465,1645469,1645471,1645473,1645475,1645486-1645488,1645626,1645641,1645685,1645743,1645763,1645951-1645953,1645955,1645993,1646098-1646106,1646178,1646220,1646302,1646304,1646420,1646470-1646471,1646476,1646559,1646717-1646723,1646773,1647026,1647042,1647530,1647655,1648304,1648815,1648907,1649973,1650081,1650365,1651116,1651120,1651280,1651470,1652938,1652970,1653041,1653471,1653550,1653574,1653797,1653815-1653816,1653819,1653840,1653857,1653888,1653972,1654013,1654030,1654050,1654123,1654148,1654159,1654513,1654515,1654517,1654522,1654524,1654725,1654735,1654766,1654785,1654851-1654852,1654978,1655122-1655124,1655126-1655127,1655129-1655130,1655132-1655133,1655312,1655351,1655438,1655441,1655454,168,1656087,1656299,1656319,1656331,1656345,1656350,1656590,1656648-1656650,1656657,1657041,1657054,1657374,1657492,1657510,1657565,1657580,1657584,1657586,1657589,1657
 
592,1657607,1657609,1657682,1657907,1658207,1658734,1658781,1658790,1658799,1658802,1658804,1658833,1658840,1658966,1659043,1659053,1659059,1659174,1659184,1659188-1659189,1659216,1659263,1659293,1659304,1659306-1659307,1659382,1659384,1659428,1659471,1659486,1659505,1659516,1659521,1659524,1659559,1659562,1659803,1659806,1659814,1659833,1659862,1659905,1659919,1659948,1659967,1659983-1659984,1660060,1660074,1660077,1660133,1660168,1660331-1660332,1660353,1660358,1660924,1661386,1661770,1661867,1661972,1661990,1662200,1662308-1662309,1662548,1662614,1662696,1662736,1662985,1662988-1662989,1663264,1663277,1663298,1663534,1663562,1663676,1663715,1663754,1663768,1663772,1663781,1663893,1663995,1664143,1664163,1664174,1664301,1664317,1664347,1664657,1664659,1664710,1664863-1664864,1664866,1665085,1665292,1665559,1665653,1665661,1665672,1665694,1665697,1665736,1665779,1665976-1665977,1665980-1665981,1665985-1665986,1665989,1665998,1666004,1666008,1666013,1666017,1666024,1666116,1666386-1
 
666387,1666494,1666496,1666552,1666569,1666579,137,149,1666757,1666966,1666972,1666985,1666995,1666997,1667292,1667402,1667406,1667546,1667615,1667630,1667636,1667688,1667764,1667871,1668026,1668135,1668193,1668593,1668596,1668630,1668639,1668843,1669353,1669370,1669451,1669800,1669838,1669876,1669882,1670394,1670433,1670591,1670598-1670600,1670610,1670631,1670719,1670724,1670726,1670730,1670940,1671112,1672272,1672284,1673754,1674294,1675461,1675486,1675594,1675830,1676231,1676250-1676251,1676364,1676381,1676393,1676479,1676525,1676552,1676615,1676630,1676634,1676721,1676926,1676943,1677140,1677802,1678011,1678162,1678174,1678339,1678426-1678427,1678694,1678701,1679534,1679708,1679710,1679716,1680034,1680246,1681056,1681123,1681138,1681280,1681283,1681286,1681450,1681697,1681699,1681701,1681729,1681770,1681779,1681793,1681807,1681837-1681838,1681854,1681862,1681958,1682028,1682033,1682311,1682315,1682317,1682320,1682324,1682330,1682842,1684172,1684366,1684383,1684526-168452
 
7,1684549-1684550,1685556,1685591,1685739,1685744,1685772,1685816,1685826,1685891,1687242,1687261,1687268,1687340,1687544,1687551,1688563,1688841,1688878,165,1688896,1688901,1689345-1689346,1689357,1689656,1689675-1689677,1689679,1689687,1689825,1689856,1689918,1690011,1690021,1690054,1690080,1690209,1691134,1691487,

svn commit: r1750059 - in /tomcat/tc7.0.x/trunk: ./ webapps/docs/changelog.xml webapps/docs/config/realm.xml

2016-06-24 Thread markt
Author: markt
Date: Fri Jun 24 09:36:54 2016
New Revision: 1750059

URL: http://svn.apache.org/viewvc?rev=1750059&view=rev
Log:
Follow-up to BZ 59399. Document NullRealm and transportGuaranteeRedirectStatus 
for all Realms.

Modified:
tomcat/tc7.0.x/trunk/   (props changed)
tomcat/tc7.0.x/trunk/webapps/docs/changelog.xml
tomcat/tc7.0.x/trunk/webapps/docs/config/realm.xml

Propchange: tomcat/tc7.0.x/trunk/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Jun 24 09:36:54 2016
@@ -1,3 +1,3 @@
 
/tomcat/tc8.0.x/trunk:1636525,1637336,1637685,1637709,1638726,1640089,1640276,1640349,1640363,1640366,1640642,1640672,1640674,1640689,1640884,1641001,1641065,1641067,1641375,1641638,1641723,1641726,1641729-1641730,1641736,1641988,1642669-1642670,1642698,1642701,1643205,1643215,1643217,1643230,1643232,1643273,1643285,1643329-1643330,1643511,1643513,1643521,1643539,1643571,1643581-1643582,1643635,1643655,1643738,1643964,1644018,1644333,1644954,1644992,1645014,1645360,1645456,1645627,1645642,1645686,1645903-1645904,1645908-1645909,1645913,1645920,1646458,1646460-1646462,1646735,1646738-1646741,1646744,1646746,1646748-1646755,1646757,1646759-1646760,1647043,1648816,1651420-1651422,1651844,1652926,1652939-1652940,1652973,1653798,1653817,1653841,1654042,1654161,1654736,1654767,1654787,1656592,1659907,1662986,1663265,1663278,1663325,1663535,1663567,1663679,1663997,1664175,1664321,1664872,1665061,1665086,1666027,1666395,1666503,1666506,1666560,1666570,1666581,1666759,1666967,1666988,1667553
 
-1667555,1667558,1667617,1667633,1667637,1667747,1667767,1667873,1668028,1668137,1668634,1669432,1669801,1669840,1669895-1669896,1670398,1670435,1670592,1670605-1670607,1670609,1670632,1670720,1670725,1670727,1670731,1671114,1672273,1672285,1673759,1674220,1674295,1675469,1675488,1675595,1675831,1676232,1676367-1676369,1676382,1676394,1676483,1676556,1676635,1678178,1679536,1679988,1680256,1681124,1681182,1681730,1681840,1681864,1681869,1682010,1682034,1682047,1682052-1682053,1682062,1682064,1682070,1682312,1682325,1682331,1682386,1684367,1684385,1685759,1685774,1685827,1685892,1687341,1688904,1689358,1689657,1689921,1692850,1693093,1693108,1693324,1694060,1694115,1694291,1694427,1694431,1694503,1694549,1694789,1694873,1694881,1695356,1695372,1695823-1695825,1696200,1696281,1696379,1696468,1700608,1700871,1700897,1700978,1701094,1701124,1701608,1701668,1701676,1701766,1701944,1702248,1702252,1702314,1702390,1702723,1702725,1702728,1702730,1702733,1702735,1702737,1702739,1702742,1702
 
744,1702748,1702751,1702754,1702758,1702760,1702763,1702766,1708779,1708782,1708806,1709314,1709670,1710347,1710442,1710448,1710490,1710574,1710578,1712226,1712229,1712235,1712255,1712618,1712649,1712655,1712860,1712899,1712903,1712906,1712913,1712926,1712975,1713185,1713262,1713287,1713613,1713621,1713872,1713976,1713994,1713998,1714004,1714013,1714059,1714538,1714580,1715189,1715207,1715544,1715549,1715637,1715639-1715645,1715667,1715683,1715866,1715978,1715981,1716216-1716217,1716355,1716414,1716421,1717208-1717209,1717257,1717283,1717288,1717291,1717421,1717517,1717529,1718797,1718840-1718843,1719348,1719357-1719358,1719400,1719491,1719737,1720235,1720396,1720442,1720446,1720450,1720463,1720658-1720660,1720756,1720816,1721813,1721818,1721831,1721861,1721867,1721882,1722523,1722527,1722800,1722926,1722941,1722997,1723130,1723440,1723488,1723890,1724434,1724674,1724792,1724803,1724902,1725128,1725131,1725154,1725167,1725911,1725921,1725929,1725963-1725965,1725970,1725974,1726171-1
 
726173,1726175,1726179-1726182,1726190-1726191,1726195-1726200,1726203,1726226,1726576,1726630,1726992,1727029,1727037,1727671,1727676,1727900,1728028,1728092,1728439,1728449,1729186,1729362,1731009,1731303,1731867,1731872,1731874,1731876,1731885,1731947,1731955,1731959,1731977,1731984,1732360,1732490,1732672,1732902,1733166,1733603,1733619,1733735,1733752,1733764,1733915,1733941,1733964,1734115,1734133,1734261,1734421,1734531,1736286,1737967,1738173,1738182,1738992,1739039,1739089-1739091,1739294,1739777,1739821,1739981,1740513,1740726,1741019,1741162,1741217,1743647,1743681,1744152,1744272,1746732,1746750
-/tomcat/tc8.5.x/trunk:1735579,1736839,1737199,1737966,1738042,1738044,1738162,1738165,1738178,1739157,1739173,1739177,1739476,1740132,1740521,1740536,1740804,1740811,1740981,1741165,1741174,1741182,1741191,1741203,1741209,1741226,1741233,1741410,1742277,1743118,1743126,1743139-1743140,1743718,1743722,1743724,1744059,1744127,1744151,1744232,1744377,1744687,1744698,1744706,1745228,1746940,1748548,1748716,1749288,1749375,1749668-1749669,1750016
-/tomcat/trunk:1156115-1157160,1157162-1157859,1157862-1157942,1157945-1160347,1160349-1163716,1163718-1166689,1166691-1174340,1174342-1175596,1175598-1175611,1175613-1175932,1175934-1177783,1177785-1177980,1178006-1180720,1180722-1183094,1183096-1187753,1187755,1187775,1187801,1187806,1187809,1187826-1188312,1188314-1188401,1188646-1188840,1188

buildbot failure in on tomcat-8-trunk

2016-06-24 Thread buildbot
The Buildbot has detected a new failure on builder tomcat-8-trunk while 
building . Full details are available at:
https://ci.apache.org/builders/tomcat-8-trunk/builds/666

Buildbot URL: https://ci.apache.org/

Buildslave for this Build: silvanus_ubuntu

Build Reason: The AnyBranchScheduler scheduler named 'on-tomcat-8-commit' 
triggered this build
Build Source Stamp: [branch tomcat/tc8.0.x/trunk] 1750058
Blamelist: markt

BUILD FAILED: failed compile_1

Sincerely,
 -The Buildbot







Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Mark Thomas
On 23/06/2016 22:31, Nate Clark wrote:
> I tried to submit the bug but it seems that I am now unable to access
> bz.apache.org. Since you made it seem like it was important for this
> to be known about here is the info and patches.

Thanks. I'll chat with the infra folks and see if I can get whatever is
going on with BZ for you fixed.

> When performing some benchmarking I noticed that the SSL performance
> of large request reads degraded heavily when performed after millions
> of small requests. Basically the setup is in a multi-threaded
> environment, about 200 threads, performing PUT requests using SSL with
> a body of about 4KB and then using 20 threads performing PUT requests
> with a body of 100MB. If the small requests are not performed the
> speed of the large requests in MB/s is about 2x.

Ouch. That is significant.

> I tracked down the issue to ERR_clear_err() blocking on an internal
> lock which protects a hash map of the error states. It seems that the
> hash map was growing unbounded because the states in it were never
> being cleared by a thread when it had completed SSL operations.
> 
> According to OpenSSL documents ERR_remove_thread_state() or
> ERR_remove_state() for versions of OpenSSL less than 1.1.0 needs to be
> invoked prior to a thread exiting. This is not done by threads in the
> native code so the hash table keeps growing and getting larger and
> larger and more expensive to maintain.
> 
> By adding a new native call which invoked ERR_remove_thread_state and
> calling it from AprEndpoint in tomcat I was able to reduce the
> contention on the lock and the performance improved.
> 
> Because of the thread pool I could not find a simple clean way to
> invoke the cleanup before the thread dies so instead I added it to the
> end of the socket processing.
> 
> Here are the patches I used against tomcat-native 1.1.34 and tomcat70:

Thanks.

I'm going to start some local performance testing to confirm I see
similar results and, assuming I do, I'll start looking at fixing this
for 1.2.x/9.0.x and back-porting.

Mark





Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Rémy Maucherat
2016-06-24 12:08 GMT+02:00 Mark Thomas :

> Thanks.
>
> I'm going to start some local performance testing to confirm I see
> similar results and, assuming I do, I'll start looking at fixing this
> for 1.2.x/9.0.x and back-porting.
>
Hum, the fix that was submitted doesn't make sense IMO since writes can be
async, so I don't see a way besides adding the "error clear" thing after
each operation [and we'll remove it once OpenSSL 1.1 is there if it
actually fixes it]. That's assuming this issue is real [I actually never
noticed anything during my many ab runs and they use a lot of threads, so I
have a hard time believing it is significant enough ;) ].

Rémy


[Bug 57665] support x-forwarded-host

2016-06-24 Thread bugzilla
https://bz.apache.org/bugzilla/show_bug.cgi?id=57665

Stefan Fussenegger  changed:

   What|Removed |Added

 CC||s...@molindo.at

--- Comment #3 from Stefan Fussenegger  ---
Created attachment 33985
  --> https://bz.apache.org/bugzilla/attachment.cgi?id=33985&action=edit
patch that adds optional X-Forwarded-Host support




[Bug 57665] support x-forwarded-host

2016-06-24 Thread bugzilla
https://bz.apache.org/bugzilla/show_bug.cgi?id=57665

--- Comment #4 from Stefan Fussenegger  ---
The patch adds support for a hostHeader that works analogously to the existing
portHeader. It's disabled by default, keeping backward compatibility. Setting
it to a value like X-Forwarded-Host will override the value returned by
ServletRequest.getServerName().
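
A hypothetical configuration with the patch applied would look like this (the
hostHeader attribute only exists with the attached patch; portHeader is
already part of the valve):

<!-- Hypothetical: hostHeader requires the attached patch. -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       portHeader="X-Forwarded-Port"
       hostHeader="X-Forwarded-Host" />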




[Bug 59752] New: Migration guide issues

2016-06-24 Thread bugzilla
https://bz.apache.org/bugzilla/show_bug.cgi?id=59752

Bug ID: 59752
   Summary: Migration guide issues
   Product: Tomcat 7
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P2
 Component: Documentation
  Assignee: dev@tomcat.apache.org
  Reporter: csuth...@redhat.com

https://tomcat.apache.org/tomcat-7.0-doc/config/manager.html#Standard_Implementation

"secureRandomAlgoithm" should be "secureRandomAlgorithm"

Also, are the links in the following line meant to point to the filter/valve
page?

internalProxies, trustedProxies attributes in [RemoteIpFilter],
[RemoteIpValve];

Why not have a link to the specific section? Like
https://tomcat.apache.org/tomcat-7.0-doc/config/valve.html#Remote_IP_Valve
instead of https://tomcat.apache.org/tomcat-7.0-doc/config/valve.html for the
RemoteIpValve link.




Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread therealneworld
On Fri, Jun 24, 2016 at 6:17 AM, Rémy Maucherat  wrote:
> 2016-06-24 12:08 GMT+02:00 Mark Thomas :
>
>> Thanks.
>>
>> I'm going to start some local performance testing to confirm I see
>> similar results and, assuming I do, I'll start looking at fixing this
>> for 1.2.x/9.0.x and back-porting.
>>
>> Hum, the fix that was submitted doesn't make sense IMO since writes can be
> async, so I don't see a way besides adding the "error clear" thing after
> each operation [and we'll remove it once OpenSSL 1.1 is there if it
> actually fixes it]. That's assuming this issue is real [I actually never
> noticed anything during my many ab runs and they use a lot of threads, so I
> have a hard time believing it is significant enough ;) ].
>

One thing about the system on which this is running is that it has a
10G NIC. The slow case is about 350MB/s and the fast one is 700MB/s,
so you would need a 10G interface or use loopback to even notice the
issue, assuming the CPU on the system can push that much encrypted
data.

-nate




Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Mark Thomas
On 24/06/2016 11:17, Rémy Maucherat wrote:
> 2016-06-24 12:08 GMT+02:00 Mark Thomas :
> 
>> Thanks.
>>
>> I'm going to start some local performance testing to confirm I see
>> similar results and, assuming I do, I'll start looking at fixing this
>> for 1.2.x/9.0.x and back-porting.
>>
> Hum, the fix that was submitted doesn't make sense IMO since writes can be
> async, so I don't see a way besides adding the "error clear" thing after
> each operation [and we'll remove it once OpenSSL 1.1 is there if it
> actually fixes it]. That's assuming this issue is real [I actually never
> noticed anything during my many ab runs and they use a lot of threads, so I
> have a hard time believing it is significant enough ;) ].

I haven't been able to reproduce anything like this yet. So far I have
only been testing with tc-native 1.2.x and Tomcat 9.0.x. I might need to
test with 1.1.x and Tomcat 7.0.x, the versions used by the OP.

I'm having trouble understanding how this is happening. I could imagine
that HashMap becoming a problem if there was a high churn in Threads.
I'm thinking of something like bursty traffic levels and an executor
aggressively halting spare threads. I need to experiment with that as well.

Nate,

We need as much information as you can provide on how to reproduce this.
As a minimum we need to know:
- Connector configuration from server.xml
- Operating system
- How tc-native was built
- Exact versions for everything

We need enough information to recreate the test and the results
that you obtained.

Mark





Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Mark Thomas
On 24/06/2016 15:40, therealnewo...@gmail.com wrote:
> On Fri, Jun 24, 2016 at 6:17 AM, Rémy Maucherat  wrote:
>> 2016-06-24 12:08 GMT+02:00 Mark Thomas :
>>
>>> Thanks.
>>>
>>> I'm going to start some local performance testing to confirm I see
>>> similar results and, assuming I do, I'll start looking at fixing this
>>> for 1.2.x/9.0.x and back-porting.
>>>
>>> Hum, the fix that was submitted doesn't make sense IMO since writes can be
>> async, so I don't see a way besides adding the "error clear" thing after
>> each operation [and we'll remove it once OpenSSL 1.1 is there if it
>> actually fixes it]. That's assuming this issue is real [I actually never
>> noticed anything during my many ab runs and they use a lot of threads, so I
>> have a hard time believing it is significant enough ;) ].
>>
> 
> One thing about the system on which this is running is that it has a
> 10G nic. So the slow case is about 350MB/s and the fast one is 700MB/s
> so you would need a 10G interface or use loop back to even notice the
> issue assuming the CPU on the system can push that much encrypted
> data.

Roughly how many requests do you have to do for the problem to appear?
Is keep-alive enabled or disabled?

(plus the questions from my other mail)

Mark





Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread therealneworld
On Fri, Jun 24, 2016 at 11:18 AM, Mark Thomas  wrote:
> On 24/06/2016 11:17, Rémy Maucherat wrote:
>> 2016-06-24 12:08 GMT+02:00 Mark Thomas :
>>
>>> Thanks.
>>>
>>> I'm going to start some local performance testing to confirm I see
>>> similar results and, assuming I do, I'll start looking at fixing this
>>> for 1.2.x/9.0.x and back-porting.
>>>
>> Hum, the fix that was submitted doesn't make sense IMO since writes can be
>> async, so I don't see a way besides adding the "error clear" thing after
>> each operation [and we'll remove it once OpenSSL 1.1 is there if it
>> actually fixes it]. That's assuming this issue is real [I actually never
>> noticed anything during my many ab runs and they use a lot of threads, so I
>> have a hard time believing it is significant enough ;) ].
>
> I haven't been able to reproduce anything like this yet. So far I have
> only been testing with tc-native 1.2.x and Tomcat 9.0.x. I might need to
> test with 1.1.x and Tomcat 7.0.x, the versions used by the OP.
>
> I'm having trouble understanding how this is happening. I could imagine
> that HashMap becoming a problem if there was a high churn in Threads.
> I'm thinking of something like bursty traffic levels and an executor
> aggressively halting spare threads. I need to experiment with that as well.
>
> Nate,
>
> We need as much information as you can provide on how to reproduce this.
> As a minimum we need to know:
> - Connector configuration from server.xml
> - Operating system
> - How tc-native was built
> - Exact versions for everything
>
> We need enough information to recreate the test and the results
> that you obtained.

Connector configuration:


Keepalive is enabled.

OS: Fedora 22
tc-native: tomcat-native-1.1.34-1.fc22.x86_64
tomcat: tomcat-7.0.68-3.fc22.noarch

This issue was seen in older versions of tomcat:
tomcat-native-1.1.30-2.fc21 and tomcat-7.0.54-3.fc21

All of the builds are the RPMs released by Fedora from their build machines.

The test I ran performed about 5 million 4k requests and then did the
large 100M requests and was able to see the issue immediately.

-nate




Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread therealneworld
On Fri, Jun 24, 2016 at 11:37 AM,   wrote:
> On Fri, Jun 24, 2016 at 11:18 AM, Mark Thomas  wrote:
>> On 24/06/2016 11:17, Rémy Maucherat wrote:
>>> 2016-06-24 12:08 GMT+02:00 Mark Thomas :
>>>
 Thanks.

 I'm going to start some local performance testing to confirm I see
 similar results and, assuming I do, I'll start looking at fixing this
 for 1.2.x/9.0.x and back-porting.

>>> Hum, the fix that was submitted doesn't make sense IMO since writes can be
>>> async, so I don't see a way besides adding the "error clear" thing after
>>> each operation [and we'll remove it once OpenSSL 1.1 is there if it
>>> actually fixes it]. That's assuming this issue is real [I actually never
>>> noticed anything during my many ab runs and they use a lot of threads, so I
>>> have a hard time believing it is significant enough ;) ].
>>
>> I haven't been able to reproduce anything like this yet. So far I have
>> only been testing with tc-native 1.2.x and Tomcat 9.0.x. I might need to
>> test with 1.1.x and Tomcat 7.0.x, the versions used by the OP.
>>
>> I'm having trouble understanding how this is happening. I could imagine
>> that HashMap becoming a problem if there was a high churn in Threads.
>> I'm thinking of something like bursty traffic levels and an executor
>> aggressively halting spare threads. I need to experiment with that as well.
>>
>> Nate,
>>
>> We need as much information as you can provide on how to reproduce this.
>> As a minimum we need to know:
>> - Connector configuration from server.xml
>> - Operating system
>> - How tc-native was built
>> - Exact versions for everything
>>
>> We need enough information to recreate the test and the results
>> that you obtained.
> OS: Fedora 22
> tc-native: tomcat-native-1.1.34-1.fc22.x86_64
> tomcat: tomcat-7.0.68-3.fc22.noarch
>
> This issue was seen in older versions of tomcat:
> tomcat-native-1.1.30-2.fc21 and tomcat-7.0.54-3.fc21

I forgot to give you the OpenSSL version: openssl-1.0.1k-15.fc22.x86_64

-nate




Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread therealneworld
On Fri, Jun 24, 2016 at 6:17 AM, Rémy Maucherat  wrote:
> 2016-06-24 12:08 GMT+02:00 Mark Thomas :
>
>> Thanks.
>>
>> I'm going to start some local performance testing to confirm I see
>> similar results and, assuming I do, I'll start looking at fixing this
>> for 1.2.x/9.0.x and back-porting.
>>
>> Hum, the fix that was submitted doesn't make sense IMO since writes can be
> async, so I don't see a way besides adding the "error clear" thing after
> each operation [and we'll remove it once OpenSSL 1.1 is there if it
> actually fixes it]. That's assuming this issue is real [I actually never
> noticed anything during my many ab runs and they use a lot of threads, so I
> have a hard time believing it is significant enough ;) ].

I was not using async IO so I did not account for that in my patch. It
was more a case of seeing if I could resolve this issue for my use case.

-nate




Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread therealneworld
On Fri, Jun 24, 2016 at 11:18 AM, Mark Thomas  wrote:
> On 24/06/2016 11:17, Rémy Maucherat wrote:
>> 2016-06-24 12:08 GMT+02:00 Mark Thomas :
>>
>>> Thanks.
>>>
>>> I'm going to start some local performance testing to confirm I see
>>> similar results and, assuming I do, I'll start looking at fixing this
>>> for 1.2.x/9.0.x and back-porting.
>>>
>> Hum, the fix that was submitted doesn't make sense IMO since writes can be
>> async, so I don't see a way besides adding the "error clear" thing after
>> each operation [and we'll remove it once OpenSSL 1.1 is there if it
>> actually fixes it]. That's assuming this issue is real [I actually never
>> noticed anything during my many ab runs and they use a lot of threads, so I
>> have a hard time believing it is significant enough ;) ].
>
> I haven't been able to reproduce anything like this yet. So far I have
> only been testing with tc-native 1.2.x and Tomcat 9.0.x. I might need to
> test with 1.1.x and Tomcat 7.0.x, the versions used by the OP.
>
> I'm having trouble understanding how this is happening. I could imagine
> that HashMap becoming a problem if there was a high churn in Threads.
> I'm thinking of something like bursty traffic levels and an executor
> aggressively halting spare threads. I need to experiment with that as well.
>

I do not understand it either. Using the thread pool there is not much
thread churn so I am not sure why the problem gets as bad as it does.
I didn't look into what the hash table actually had in it. I just
noticed that the majority of a read thread's time was spent waiting for
the lock to access this hash table. Once I added the call to
ERR_remove_thread_state the waiting basically disappeared.

For this test the traffic is constant. Each client thread creates one
connection and just keeps pushing requests for a set number of requests,
so we aren't even creating new connections.
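
For reference, the cleanup being described boils down to something like this
at the end of socket processing (a sketch assuming OpenSSL earlier than 1.1.0;
the wrapper name is mine, this is not the exact patch):

#include <openssl/opensslv.h>
#include <openssl/err.h>

/* Sketch only: drop the calling thread's entry from OpenSSL's internal
 * error-state hash map so the map, and the lock guarding it, do not keep
 * growing as threads handle SSL traffic. Unnecessary in OpenSSL 1.1.0+. */
static void ssl_thread_error_cleanup(void)
{
#if OPENSSL_VERSION_NUMBER < 0x10100000L
    ERR_remove_thread_state(NULL);   /* NULL means the calling thread */
#endif
}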

-nate




Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Mark Thomas
On 24/06/2016 18:11, therealnewo...@gmail.com wrote:
> On Fri, Jun 24, 2016 at 11:18 AM, Mark Thomas  wrote:
>> On 24/06/2016 11:17, Rémy Maucherat wrote:
>>> 2016-06-24 12:08 GMT+02:00 Mark Thomas :
>>>
 Thanks.

 I'm going to start some local performance testing to confirm I see
 similar results and, assuming I do, I'll start looking at fixing this
 for 1.2.x/9.0.x and back-porting.

>>> Hum, the fix that was submitted doesn't make sense IMO since writes can be
>>> async, so I don't see a way besides adding the "error clear" thing after
>>> each operation [and we'll remove it once OpenSSL 1.1 is there if it
>>> actually fixes it]. That's assuming this issue is real [I actually never
>>> noticed anything during my many ab runs and they use a lot of threads, so I
>>> have a hard time believing it is significant enough ;) ].
>>
>> I haven't been able to reproduce anything like this yet. So far I have
>> only been testing with tc-native 1.2.x and Tomcat 9.0.x. I might need to
>> test with 1.1.x and Tomcat 7.0.x, the versions used by the OP.
>>
>> I'm having trouble understanding how this is happening. I could imagine
>> that HashMap becoming a problem if there was a high churn in Threads.
>> I'm thinking of something like bursty traffic levels and an executor
>> aggressively halting spare threads. I need to experiment with that as well.
>>
> 
> I do not understand it either. Using the thread pool there is not much
> thread churn so I am not sure why the problem gets as bad as it does.
> I didn't look into what the hash table actually had in it. I just
> noticed that the majority of a read threads time was spent waiting for
> the lock to access this hash table. Once I added the call to
> ERR_remove_thread_state the waiting basically disappeared.
> 
> For this test the traffic is constant. Each client thread creates one
> connection and just keeps pushing requests for set number of requests,
> so we aren't even creating new connections.

Can you provide the settings you are using for the Executor as well please?

Thanks,

Mark





Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Mark Thomas
On 24/06/2016 18:25, Mark Thomas wrote:
> On 24/06/2016 18:11, therealnewo...@gmail.com wrote:
>> On Fri, Jun 24, 2016 at 11:18 AM, Mark Thomas  wrote:
>>> On 24/06/2016 11:17, Rémy Maucherat wrote:
 2016-06-24 12:08 GMT+02:00 Mark Thomas :

> Thanks.
>
> I'm going to start some local performance testing to confirm I see
> similar results and, assuming I do, I'll start looking at fixing this
> for 1.2.x/9.0.x and back-porting.
>
 Hum, the fix that was submitted doesn't make sense IMO since writes can be
 async, so I don't see a way besides adding the "error clear" thing after
 each operation [and we'll remove it once OpenSSL 1.1 is there if it
 actually fixes it]. That's assuming this issue is real [I actually never
 noticed anything during my many ab runs and they use a lot of threads, so I
 have a hard time believing it is significant enough ;) ].
>>>
>>> I haven't been able to reproduce anything like this yet. So far I have
>>> only been testing with tc-native 1.2.x and Tomcat 9.0.x. I might need to
>>> test with 1.1.x and Tomcat 7.0.x, the versions used by the OP.
>>>
>>> I'm having trouble understanding how this is happening. I could imagine
>>> that HashMap becoming a problem if there was a high churn in Threads.
>>> I'm thinking of something like bursty traffic levels and an executor
>>> aggressively halting spare threads. I need to experiment with that as well.
>>>
>>
>> I do not understand it either. Using the thread pool there is not much
>> thread churn so I am not sure why the problem gets as bad as it does.
>> I didn't look into what the hash table actually had in it. I just
>> noticed that the majority of a read threads time was spent waiting for
>> the lock to access this hash table. Once I added the call to
>> ERR_remove_thread_state the waiting basically disappeared.
>>
>> For this test the traffic is constant. Each client thread creates one
>> connection and just keeps pushing requests for set number of requests,
>> so we aren't even creating new connections.
> 
> Can you provide the settings you are using for the Executor as well please?

And how long do the initial 5,000,000 4k requests take to process?

Thanks,

Mark





Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Nate Clark
On Fri, Jun 24, 2016 at 1:27 PM, Mark Thomas  wrote:
> On 24/06/2016 18:25, Mark Thomas wrote:
>>
>> Can you provide the settings you are using for the Executor as well please?



>
> And how long do the initial 5,000,000 4k requests take to process?
>

40 minutes.

-nate




Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Nate Clark
On Fri, Jun 24, 2016 at 1:37 PM, Nate Clark  wrote:
> On Fri, Jun 24, 2016 at 1:27 PM, Mark Thomas  wrote:
>> On 24/06/2016 18:25, Mark Thomas wrote:
>>>
>>> Can you provide the settings you are using for the Executor as well please?
>
>  maxThreads="500" minSpareThreads="4"/>
>
>>
>> And how long do the initial 5,000,000 4k requests take to process?
>>
>
> 40 minutes.
>
Not sure this matters but I just double checked and there are actually
400 threads in total doing the 4k PUTs. Two clients each doing 200
threads. The 100MB test is 24 threads in total, 12 per client machine.

Sorry for misinformation earlier.

-nate




Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Mark Thomas
On 24/06/2016 18:41, Nate Clark wrote:
> On Fri, Jun 24, 2016 at 1:37 PM, Nate Clark  wrote:
>> On Fri, Jun 24, 2016 at 1:27 PM, Mark Thomas  wrote:
>>> On 24/06/2016 18:25, Mark Thomas wrote:

 Can you provide the settings you are using for the Executor as well please?
>>
>> > maxThreads="500" minSpareThreads="4"/>
>>
>>>
>>> And how long do the initial 5,000,000 4k requests take to process?
>>>
>>
>> 40 minutes.
>>
> Not sure this matters but I just double checked and there are actually
> 400 threads in total doing the 4k PUTs. Two clients each doing 200
> threads. the 100MB test is 24 threads total 12 per client machine.
> 
> Sorry for misinformation earlier.

No problem. Thanks for the information. One last question (for now). How
many processors / cores / threads does the server support? I'm trying to
get a handle on what the concurrency looks like.

Thanks,

Mark





Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread therealneworld
On Fri, Jun 24, 2016 at 2:07 PM, Mark Thomas  wrote:
> On 24/06/2016 18:41, Nate Clark wrote:
>> On Fri, Jun 24, 2016 at 1:37 PM, Nate Clark  wrote:
>>> On Fri, Jun 24, 2016 at 1:27 PM, Mark Thomas  wrote:
 On 24/06/2016 18:25, Mark Thomas wrote:
>
> Can you provide the settings you are using for the Executor as well 
> please?
>>>
>>> >> maxThreads="500" minSpareThreads="4"/>
>>>

 And how long do the initial 5,000,000 4k requests take to process?

>>>
>>> 40 minutes.
>>>
>> Not sure this matters but I just double checked and there are actually
>> 400 threads in total doing the 4k PUTs. Two clients each doing 200
>> threads. the 100MB test is 24 threads total 12 per client machine.
>>
>> Sorry for misinformation earlier.
>
> No problem. Thanks for the information. One last question (for now). How
> many processors / cores / threads does the server support? I'm trying to
> get a handle on what the concurrency looks like.
>

The machine has two physical chips each with 6 cores and
hyper-threading enabled, so 24 cores exposed to the OS.

cpuinfo for first core:
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 63
model name  : Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
stepping: 2
microcode   : 0x2e
cpu MHz : 1212.656
cache size  : 15360 KB
physical id : 0
siblings: 12
core id : 0
cpu cores   : 6
apicid  : 0
initial apicid  : 0
fpu : yes
fpu_exception   : yes
cpuid level : 15
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe
syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts
rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq
dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid
dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx
f16c rdrand lahf_lm abm ida arat epb pln pts dtherm tpr_shadow vnmi
flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms
invpcid cqm xsaveopt cqm_llc cqm_occup_llc
bugs:
bogomips: 4788.98
clflush size: 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:


If it matters the system also has 256GB of memory.

-nate




Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Mark Thomas
On 24/06/2016 20:01, therealnewo...@gmail.com wrote:
> On Fri, Jun 24, 2016 at 2:07 PM, Mark Thomas  wrote:
>> On 24/06/2016 18:41, Nate Clark wrote:
>>> On Fri, Jun 24, 2016 at 1:37 PM, Nate Clark  wrote:
 On Fri, Jun 24, 2016 at 1:27 PM, Mark Thomas  wrote:
> On 24/06/2016 18:25, Mark Thomas wrote:
>>
>> Can you provide the settings you are using for the Executor as well 
>> please?

 >>> maxThreads="500" minSpareThreads="4"/>

>
> And how long do the initial 5,000,000 4k requests take to process?
>

 40 minutes.

>>> Not sure this matters but I just double checked and there are actually
>>> 400 threads in total doing the 4k PUTs. Two clients each doing 200
>>> threads. the 100MB test is 24 threads total 12 per client machine.
>>>
>>> Sorry for misinformation earlier.
>>
>> No problem. Thanks for the information. One last question (for now). How
>> many processors / cores / threads does the server support? I'm trying to
>> get a handle on what the concurrency looks like.
>>
> 
> The machine has two physical chips each with 6 cores and
> hyper-threading enabled, so 24 cores exposed to the OS.

Thanks.



> If it matters the system also has 256GB of memory.

I don't think RAM is playing a role here but it is still good to know.

In terms of next steps, I want to see if I can come up with a theory
that matches what you are observing. From that we can then assess
whether the proposed patch can be improved.

Apologies for the drip-feeding of questions. As I learn a bit more, a
few more questions come to mind.

I'm wondering if this is a problem that builds up over time. If I
understood your previous posts correctly, running the big tests
immediately gave ~700MB/s whereas running the small tests then the big
tests resulting in ~350MB/s during the big tests. Are you able to
experiment with this a little bit? For example, if you do big tests, 1M
(~20%) small tests, big tests, 1M small tests, big tests etc. What is
the data rate for the big tests after 0, 1M, 2M, 3M, 4M and 5M little tests?

What I am trying to pin down is how quickly does this problem build up.

Also, do you see any failed requests or do they all succeed?

Thanks again,

Mark



> 
> -nate
> 





Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread therealneworld
On Fri, Jun 24, 2016 at 3:21 PM, Mark Thomas  wrote:
> On 24/06/2016 20:01, therealnewo...@gmail.com wrote:
>> On Fri, Jun 24, 2016 at 2:07 PM, Mark Thomas  wrote:
>>> On 24/06/2016 18:41, Nate Clark wrote:
 On Fri, Jun 24, 2016 at 1:37 PM, Nate Clark  wrote:
> On Fri, Jun 24, 2016 at 1:27 PM, Mark Thomas  wrote:
>> On 24/06/2016 18:25, Mark Thomas wrote:
>>>
>>> Can you provide the settings you are using for the Executor as well 
>>> please?
>
>  maxThreads="500" minSpareThreads="4"/>
>
>>
>> And how long do the initial 5,000,000 4k requests take to process?
>>
>
> 40 minutes.
>
 Not sure this matters but I just double checked and there are actually
 400 threads in total doing the 4k PUTs. Two clients each doing 200
 threads. the 100MB test is 24 threads total 12 per client machine.

 Sorry for misinformation earlier.
>>>
>>> No problem. Thanks for the information. One last question (for now). How
>>> many processors / cores / threads does the server support? I'm trying to
>>> get a handle on what the concurrency looks like.
>>>
>>
>> The machine has two physical chips each with 6 cores and
>> hyper-threading enabled, so 24 cores exposed to the OS.
>
> Thanks.
>
> 
>
>> If it matters the system also has 256GB of memory.
>
> I don't think RAM is playing a role here but it is still good to know.
>
> In terms of next steps, I want to see if I can come up with a theory
> that matches what you are observing. From that we can then assess
> whether the proposed patch can be improved.
>
> Apologies for the drip-feeding of questions. As I learn a bit more, a
> few more questions come to mind.
>
> I'm wondering if this is a problem that builds up over time. If I
> understood your previous posts correctly, running the big tests
> immediately gave ~700MB/s whereas running the small tests then the big
> tests resulting in ~350MB/s during the big tests. Are you able to
> experiment with this a little bit? For example, if you do big tests, 1M
> (~20%) small tests, big tests, 1M small tests, big tests etc. What is
> the data rate for the big tests after 0, 1M, 2M, 3M, 4M and 5M little tests.

Sure I can try that. For the in between tests do you want me to run
those for a set amount of time or number of files? Like each smaller
batch like 20min and then 10min of large and then next smaller size?

> What I am trying to pin down is how quickly does this problem build up.
>
> Also, do you see any failed requests or do they all succeed?

All successes.

-nate




Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread therealneworld
On Fri, Jun 24, 2016 at 4:52 PM,   wrote:
> On Fri, Jun 24, 2016 at 3:21 PM, Mark Thomas  wrote:
>> On 24/06/2016 20:01, therealnewo...@gmail.com wrote:
>>> On Fri, Jun 24, 2016 at 2:07 PM, Mark Thomas  wrote:
 On 24/06/2016 18:41, Nate Clark wrote:
> On Fri, Jun 24, 2016 at 1:37 PM, Nate Clark  wrote:
>> On Fri, Jun 24, 2016 at 1:27 PM, Mark Thomas  wrote:
>>> On 24/06/2016 18:25, Mark Thomas wrote:

 Can you provide the settings you are using for the Executor as well 
 please?
>>
>> > maxThreads="500" minSpareThreads="4"/>
>>
>>>
>>> And how long do the initial 5,000,000 4k requests take to process?
>>>
>>
>> 40 minutes.
>>
> Not sure this matters but I just double checked and there are actually
> 400 threads in total doing the 4k PUTs. Two clients each doing 200
> threads. the 100MB test is 24 threads total 12 per client machine.
>
> Sorry for misinformation earlier.

 No problem. Thanks for the information. One last question (for now). How
 many processors / cores / threads does the server support? I'm trying to
 get a handle on what the concurrency looks like.

>>>
>>> The machine has two physical chips each with 6 cores and
>>> hyper-threading enabled, so 24 cores exposed to the OS.
>>
>> Thanks.
>>
>> 
>>
>>> If it matters the system also has 256GB of memory.
>>
>> I don't think RAM is playing a role here but it is still good to know.
>>
>> In terms of next steps, I want to see if I can come up with a theory
>> that matches what you are observing. From that we can then assess
>> whether the proposed patch can be improved.
>>
>> Apologies for the drip-feeding of questions. As I learn a bit more, a
>> few more questions come to mind.
>>
>> I'm wondering if this is a problem that builds up over time. If I
>> understood your previous posts correctly, running the big tests
>> immediately gave ~700MB/s whereas running the small tests then the big
>> tests resulting in ~350MB/s during the big tests. Are you able to
>> experiment with this a little bit? For example, if you do big tests, 1M
>> (~20%) small tests, big tests, 1M small tests, big tests etc. What is
>> the data rate for the big tests after 0, 1M, 2M, 3M, 4M and 5M little tests.
>
> Sure I can try that. For the in between tests do you want me to run
> those for a set amount of time or number of files? Like each smaller
> batch like 20min and then 10min of large and then next smaller size?

Ignore that question. I misinterpreted your 1m to be 1MB.

-nate




Re: SSL errors with tc-native

2016-06-24 Thread Christopher Schultz
Mark,

On 6/22/16 9:06 AM, Mark Thomas wrote:
> On 22/06/2016 14:01, Rainer Jung wrote:
>> Hi Mark,
>>
>> Am 22.06.2016 um 13:20 schrieb Mark Thomas:
>>> A while ago I observed unexpected APR_EGENERAL errors being returned
>>> when performing SSL reads. I was unable to identify the root cause but I
>>> did discover that if those errors were treated as EAGAIN, processing
>>> continued normally. As a result, I committed [1].
>>>
>>> A report on the users list [2] has highlighted that, in some
>>> circumstances at least, [1] is the wrong thing to do. I have therefore
>>> investigated the circumstances that led to [1] further. The relevant
>>> code is [3].
>>>
>>> With some local debug logging I have discovered that in the case [1] was
>>> trying to address the result of the SSL_read call at [3] is as follows:
>>> s == -1
>>> i == 5 (SSL_ERROR_SYSCALL)
>>> rv = 730035
>>>
>>> Subtracting the 720000 offset from rv gives the OS error as 10035 which
>>> [4] gives as WSAEWOULDBLOCK which looks exactly like EAGAIN to me.
>>>
>>> Based on the above, my conclusion is that [2] was caused by some other
>>> windows error which was incorrectly handled as EAGAIN.
>>>
>>> Therefore, I'd like to propose something along the following lines for
>>> tc-native:
>>> Index: src/sslnetwork.c
>>> ===================================================================
>>> --- src/sslnetwork.c(revision 1749592)
>>> +++ src/sslnetwork.c(working copy)
>>> @@ -427,7 +427,11 @@
>>>  con->shutdown_type = SSL_SHUTDOWN_TYPE_STANDARD;
>>>  return APR_EOF;
>>>  }
>>> -#if !defined(_WIN32)
>>> +#if defined(_WIN32)
>>> +else if (rv == 730035 && timeout == 0) {
>>> +return APR_EAGAIN;
>>> +}
>>> +#else
>>>  else if (APR_STATUS_IS_EINTR(rv)) {
>>>  /* Interrupted by signal
>>>   */
>>
>> ... and reverting [1] ?
> 
> Yes.
> 
>>> I'd appreciate some review of this change as I know C is not my strong
>>> point. The hard-coded value for the test of rv looks wrong to me. Is
>>> there a better way to do this? Any other review comments?
>>>
>>> Obviously some changes will be required on the Tomcat side as well. I'm
>>> still looking at those as I think I have discovered another issue that
>>> was masked by [1].
>>
>> File apr_errno.h contains a macro
>>
>>   APR_STATUS_IS_EAGAIN(s)
>>
>> which in the WIN32 case is defined as:
>>
>>   ((s) == APR_EAGAIN \
>>   || (s) == APR_OS_START_SYSERR + ERROR_NO_DATA \
>>   || (s) == APR_OS_START_SYSERR + ERROR_NO_PROC_SLOTS \
>>   || (s) == APR_OS_START_SYSERR + ERROR_NESTING_NOT_ALLOWED \
>>   || (s) == APR_OS_START_SYSERR + ERROR_MAX_THRDS_REACHED \
>>   || (s) == APR_OS_START_SYSERR + ERROR_LOCK_VIOLATION \
>>   || (s) == APR_OS_START_SYSERR + WSAEWOULDBLOCK)
>>
>> so one could use "APR_STATUS_IS_EAGAIN(rv)" instead of "rv == 730035" as
>> a condition. This would be broader than your suggestion. Whether it
>> still separates the case the user observed from the one you want to
>> handle could perhaps be tested by the user in [2]?
> 
> That is much better.
> 
>> Finally, if using the macro, one could also likely just drop the "#if
>> defined(_WIN32)" before the EAGAIN test, because the macro is also
>> defined for other platforms. For Unix/Linux either as "((s) ==
>> APR_EAGAIN)" or "((s) == APR_EAGAIN || (s) == EWOULDBLOCK)".
> 
> Makes sense.
> 
> Thanks for the review. I'll adjust the patch accordingly and commit.

I was going to make a very similar suggestion; glad Rainer spoke up. That
magic number looked like a landmine ready to explode, or, worse yet, be
one of those lines of code nobody ever wants to touch in the future
because they are scared to break something... but nobody knows what it's
there for :)
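
For reference, folding in Rainer's suggestion, the revised hunk would
presumably end up looking something like this (a sketch only, untested,
not the committed change):

    else if (APR_STATUS_IS_EAGAIN(rv) && timeout == 0) {
        /* Non-blocking socket: the APR macro covers WSAEWOULDBLOCK on
         * Windows and EAGAIN/EWOULDBLOCK elsewhere, so the _WIN32
         * guard from the earlier proposal can be dropped. */
        return APR_EAGAIN;
    }
    else if (APR_STATUS_IS_EINTR(rv)) {
        /* Interrupted by signal
         */
        ...
    }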

Don't sell your C skills too short, Mark ;)

-chris





Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Mark Thomas
On 24/06/2016 21:52, therealnewo...@gmail.com wrote:

>> I'm wondering if this is a problem that builds up over time. If I
>> understood your previous posts correctly, running the big tests
>> immediately gave ~700MB/s whereas running the small tests then the big
>> tests resulted in ~350MB/s during the big tests. Are you able to
>> experiment with this a little bit? For example, if you do big tests, 1M
>> (~20%) small tests, big tests, 1M small tests, big tests etc. What is
>> the data rate for the big tests after 0, 1M, 2M, 3M, 4M and 5M little tests?
> 
> Sure I can try that. For the in between tests do you want me to run
> those for a set amount of time or number of files? Like each smaller
> batch like 20min and then 10min of large and then next smaller size?

I was thinking a set number of files.

It would also be useful to know how many threads the executor has created
at each point. (JMX should tell you that; you might need to adjust the
executor so it doesn't stop idle threads.)

>> What I am trying to pin down is how quickly does this problem build up.
>>
>> Also, do you see any failed requests or do they all succeed?
> 
> All successes.

OK. That rules out some possibilities.

Going back to your original description, you said you saw blocking
during the call to ERR_clear_err(). Did you mean ERR_clear_error()?
Either way, could you provide the full stack trace of an example blocked
thread? And, ideally, the stack trace of the thread currently holding
the lock? I'm still trying to understand what is going on here since
based on my understanding of the code so far, the HashMap is bounded (to
the number of threads) and should reach that limit fairly quickly.

Thanks,

Mark




Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread Christopher Schultz
Nate,

On 6/24/16 10:40 AM, therealnewo...@gmail.com wrote:
> On Fri, Jun 24, 2016 at 6:17 AM, Rémy Maucherat  wrote:
>> 2016-06-24 12:08 GMT+02:00 Mark Thomas :
>>
>>> Thanks.
>>>
>>> I'm going to start some local performance testing to confirm I see
>>> similar results and, assuming I do, I'll start looking at fixing this
>>> for 1.2.x/9.0.x and back-porting.
>>>
>> Hum, the fix that was submitted doesn't make sense IMO since writes can be
>> async, so I don't see a way besides adding the "error clear" thing after
>> each operation [and we'll remove it once OpenSSL 1.1 is there if it
>> actually fixes it]. That's assuming this issue is real [I actually never
>> noticed anything during my many ab runs and they use a lot of threads, so I
>> have a hard time believing it is significant enough ;) ].
>>
> 
> One thing about the system on which this is running is that it has a
> 10G NIC. The slow case is about 350MB/s and the fast one is 700MB/s,
> so you would need a 10G interface (or loopback) to even notice the
> issue, assuming the CPU on the system can push that much encrypted
> data.

I believe Jean-Frederic Clere (works with Rémy) has been using 10G
interfaces for the Tomcat performance testing, including using APR. I
don't have his slides handy... perhaps they show an unexplained
performance drop when using APR/tcnative? Or perhaps they show no
performance drop (relative to httpd).

That would be interesting to see.

-chris





Re: Bug that spans tomcat and tomcat-native

2016-06-24 Thread therealneworld
On Fri, Jun 24, 2016 at 5:31 PM, Mark Thomas  wrote:
> On 24/06/2016 21:52, therealnewo...@gmail.com wrote:
>
>>> I'm wondering if this is a problem that builds up over time. If I
>>> understood your previous posts correctly, running the big tests
>>> immediately gave ~700MB/s whereas running the small tests then the big
>>> tests resulted in ~350MB/s during the big tests. Are you able to
>>> experiment with this a little bit? For example, if you do big tests, 1M
>>> (~20%) small tests, big tests, 1M small tests, big tests etc. What is
>>> the data rate for the big tests after 0, 1M, 2M, 3M, 4M and 5M little tests?
>>
>> Sure I can try that. For the in between tests do you want me to run
>> those for a set amount of time or number of files? Like each smaller
>> batch like 20min and then 10min of large and then next smaller size?
>
> I was thinking a set number of files.
>
> It would also be useful to know how many threads the executor has created
> at each point. (JMX should tell you that; you might need to adjust the
> executor so it doesn't stop idle threads.)

I saw your message about not stopping idle threads after I already
started things.

1st 100M test:
851.348MB/s
Executor:
largestPoolSize: 25
poolSize: 25
1st 4k test:
Executor:
largestPoolSize: 401
poolSize: 401
2nd 100M test:
460.147MB/s
Executor:
largestPoolSize: 401
poolSize: 79
2nd 4k test:
Executor:
largestPoolSize: 414
poolSize: 414
3rd 100M test:
429.127MB/s
Executor:
largestPoolSize: 414
poolSize: 80
3rd 4k test:
Executor:
largestPoolSize: 414
poolSize: 401
4th 100M test:
437.918MB/s
Executor:
largestPoolSize: 414
poolSize: 86
4th 4k test:
Executor:
largestPoolSize: 414
poolSize: 401
5th 100M test:
464.837MB/s
Executor:
largestPoolSize: 414
poolSize: 87

It looks like the problem occurs right after the first set of 4k puts
and doesn't get any worse, so whatever causes the issue happens early.
This is getting stranger and I really cannot explain why calling
ERR_remove_thread_state reliably improves performance.
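
For context, ERR_remove_thread_state() is the OpenSSL 1.0.x call that
discards a thread's entry from the library's global error-state hash. A
sketch of what invoking it looks like; the hook point below is
hypothetical, since where tc-native should actually make the call is
exactly what is unresolved here:

#include <openssl/err.h>

/* Hypothetical hook, run as a worker thread is about to exit: drop the
 * calling thread's entry from OpenSSL's global error-state hash so the
 * table, and contention on the mutex guarding it, stop growing. */
static void on_worker_thread_exit(void)
{
    ERR_remove_thread_state(NULL);  /* NULL selects the calling thread */
}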

> Going back to your original description, you said you saw blocking
> during the call to ERR_clear_err(). Did you mean ERR_clear_error()?
> Either way, could you provide the full stack trace of an example blocked
> thread? And, ideally, the stack trace of the thread currently holding
> the lock? I'm still trying to understand what is going on here since
> based on my understanding of the code so far, the HashMap is bounded (to
> the number of threads) and should reach that limit fairly quickly.

Sorry, yes I did mean ERR_clear_error().

#0  __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x7f0f61ab989d in __GI___pthread_mutex_lock (mutex=0x2b49c58)
at ../nptl/pthread_mutex_lock.c:80
#2  0x7f0f3205f183 in int_thread_get (create=0) at err.c:446
#3  0x7f0f3205f68d in int_thread_get_item (d=0x7f0ca89c7ce0) at err.c:491
#4  0x7f0f32060094 in ERR_get_state () at err.c:1014
#5  0x7f0f320602cf in ERR_clear_error () at err.c:747
#6  0x7f0f325f3579 in ssl_socket_recv (sock=0x7f0dcc391980,
buf=0x7f0eec067820
"lock->199808-Source_filename->rhino_perf_https_lt_100g_a-Loop->1-Count->11089487-11089488-11089489-11089490-11089491-11089492-11089493-11089494-11089495-11089496-11089497-11089498-11089499-11089500-11"...,
len=0x7f0ca89c7ff0) at src/sslnetwork.c:401
#7  0x7f0f325ece99 in Java_org_apache_tomcat_jni_Socket_recvbb
(e=, o=, sock=,
offset=, len=) at src/network.c:957
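
The backtrace matches the OpenSSL 1.0.x error-state path, where every
read first funnels through one process-global lock. Schematically (a
paraphrase of err.c for illustration, not the verbatim source):

/* Every ssl_socket_recv() starts with ERR_clear_error(), which fetches
 * the calling thread's ERR_STATE from a process-global hash guarded by
 * a single mutex (frames #0-#5 above), so all worker threads serialize
 * on that lock regardless of how many connections are in flight. */
void ERR_clear_error(void)
{
    ERR_STATE *es = ERR_get_state(); /* -> int_thread_get_item()      */
                                     /*    -> int_thread_get(): lock, */
                                     /*       hash lookup, unlock     */
    /* ... clear the per-thread error queue held in es ... */
}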

I tried getting more data, but the JVM tends to dump core when gdb is
attached, or runs too slowly to actually cause the lock contention. I
can reliably see a thread waiting on this lock if I attach to a single
thread, randomly interrupt it, and look at the backtrace. When I look
at the mutex it has a different owner each time, so different threads
are getting the lock.

I will play with this a bit more on Monday.

-nate




[Bug 57830] Add support for ProxyProtocol

2016-06-24 Thread bugzilla
https://bz.apache.org/bugzilla/show_bug.cgi?id=57830

SATOH Fumiyasu  changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |fumiyas-u-apa...@sfo.jp

-- 
You are receiving this mail because:
You are the assignee for the bug.




[GUMP@vmgump]: Project tomcat-tc8.0.x-test-apr (in module tomcat-8.0.x) failed

2016-06-24 Thread Bill Barker
To whom it may engage...

This is an automated request, but not an unsolicited one. For 
more information please visit http://gump.apache.org/nagged.html, 
and/or contact the folk at gene...@gump.apache.org.

Project tomcat-tc8.0.x-test-apr has an issue affecting its community 
integration.
This issue affects 1 projects,
 and has been outstanding for 7 runs.
The current state of this project is 'Failed', with reason 'Build Timed Out'.
For reference only, the following projects are affected by this:
- tomcat-tc8.0.x-test-apr :  Tomcat 8.x, a web server implementing the Java 
Servlet 3.1,
...


Full details are available at:

http://vmgump.apache.org/gump/public/tomcat-8.0.x/tomcat-tc8.0.x-test-apr/index.html

That said, some information snippets are provided here.

The following annotations (debug/informational/warning/error messages) were 
provided:
 -DEBUG- Dependency on commons-daemon exists, no need to add for property 
commons-daemon.native.src.tgz.
 -DEBUG- Dependency on commons-daemon exists, no need to add for property 
tomcat-native.tar.gz.
 -INFO- Failed with reason build timed out
 -INFO- Project Reports in: 
/srv/gump/public/workspace/tomcat-8.0.x/output/logs-APR
 -INFO- Project Reports in: 
/srv/gump/public/workspace/tomcat-8.0.x/output/test-tmp-APR/logs
 -WARNING- No directory 
[/srv/gump/public/workspace/tomcat-8.0.x/output/test-tmp-APR/logs]



The following work was performed:
http://vmgump.apache.org/gump/public/tomcat-8.0.x/tomcat-tc8.0.x-test-apr/gump_work/build_tomcat-8.0.x_tomcat-tc8.0.x-test-apr.html
Work Name: build_tomcat-8.0.x_tomcat-tc8.0.x-test-apr (Type: Build)
Work ended in a state of : Failed
Elapsed: 1 hour 6 secs
Command Line: /usr/lib/jvm/java-8-oracle/bin/java -Djava.awt.headless=true 
-Dbuild.sysclasspath=only org.apache.tools.ant.Main 
-Dgump.merge=/srv/gump/public/gump/work/merge.xml 
-Dbase.path=/srv/gump/public/workspace/tomcat-8.0.x/tomcat-build-libs 
-Dexecute.test.nio2=false -Dtest.temp=output/test-tmp-APR 
-Djunit.jar=/srv/gump/public/workspace/junit/target/junit-4.13-SNAPSHOT.jar 
-Dobjenesis.jar=/srv/gump/public/workspace/objenesis/main/target/objenesis-2.5-SNAPSHOT.jar
 -Dexamples.sources.skip=true 
-Dcommons-daemon.jar=/srv/gump/public/workspace/apache-commons/daemon/dist/commons-daemon-20160625.jar
 
-Dtest.openssl.path=/srv/gump/public/workspace/openssl-1.0.2/dest-20160625/bin/openssl
 -Dexecute.test.nio=false 
-Dhamcrest.jar=/srv/gump/packages/hamcrest/hamcrest-core-1.3.jar 
-Dexecute.test.apr=true -Dexecute.test.bio=false 
-Dcommons-daemon.native.src.tgz=/srv/gump/public/workspace/apache-commons/daemon/dist/bin/commons-daemon-20160625-native-src.tar.gz
 -Dtest.reports=output/logs-APR -Dtomcat-native.tar.gz=/srv/gump/public/workspace/apache-commons/daemon/dist/bin/commons-daemon-20160625-native-src.tar.gz
 -Djdt.jar=/srv/gump/packages/eclipse/plugins/R-4.5-201506032000/ecj-4.5.jar 
-Dtest.apr.loc=/srv/gump/public/workspace/tomcat-native-12/dest-20160625/lib 
-Dtest.relaxTiming=true -Dtest.excludePerformance=true -Dtest.accesslog=true 
-Deasymock.jar=/srv/gump/public/workspace/easymock/core/target/easymock-3.5-SNAPSHOT.jar
 -Dcglib.jar=/srv/gump/packages/cglib/cglib-nodep-2.2.jar test 
[Working Directory: /srv/gump/public/workspace/tomcat-8.0.x]
CLASSPATH: 
/usr/lib/jvm/java-8-oracle/lib/tools.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/webapps/examples/WEB-INF/classes:/srv/gump/public/workspace/tomcat-8.0.x/output/testclasses:/srv/gump/public/workspace/ant/dist/lib/ant.jar:/srv/gump/public/workspace/ant/dist/lib/ant-launcher.jar:/srv/gump/public/workspace/ant/dist/lib/ant-jmf.jar:/srv/gump/public/workspace/ant/dist/lib/ant-junit.jar:/srv/gump/public/workspace/ant/dist/lib/ant-junit4.jar:/srv/gump/public/workspace/ant/dist/lib/ant-swing.jar:/srv/gump/public/workspace/ant/dist/lib/ant-apache-resolver.jar:/srv/gump/public/workspace/ant/dist/lib/ant-apache-xalan2.jar:/srv/gump/public/workspace/xml-commons/java/build/resolver.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/bin/bootstrap.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/bin/tomcat-juli.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/annotations-api.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/servlet-api.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/jsp-api.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/el-api.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/websocket-api.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/catalina.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/catalina-ant.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/catalina-storeconfig.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/tomcat-coyote.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/jasper.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/jasper-el.jar:/srv/gump/public/workspace/tomcat-8.0.x/output/build/lib/catalina-tribe