cool, thanks ... easy enough to fix the SQL statement for now ;-)
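For the record, the rewrite that works is to repeat the aggregate expression in HAVING instead of referencing the alias (the alias can still be used in ORDER BY). A quick sanity check of the query shape below uses an in-memory SQLite table with made-up data, purely for illustration — the real query of course runs through Calcite's JDBC driver against Solr:

```python
import sqlite3

# Hypothetical in-memory "ratings" table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ratings (movie_id INTEGER, rating REAL)")
rows = [(1, 4.0)] * 150 + [(2, 3.0)] * 50 + [(3, 2.0)] * 200
conn.executemany("INSERT INTO ratings VALUES (?, ?)", rows)

# HAVING repeats COUNT(*) rather than referencing the num_ratings alias;
# ORDER BY still uses the aggAvg alias, which is accepted.
query = """
SELECT movie_id, COUNT(*) AS num_ratings, AVG(rating) AS aggAvg
FROM ratings
GROUP BY movie_id
HAVING COUNT(*) > 100
ORDER BY aggAvg ASC
LIMIT 10
"""
result = conn.execute(query).fetchall()
print(result)  # movie 2 (only 50 ratings) is filtered out
```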

On Tue, May 16, 2017 at 6:27 PM, Kevin Risden <compuwizard...@gmail.com> wrote:
> Well didn't take as long as I thought:
> https://issues.apache.org/jira/browse/CALCITE-1306
>
> Once Calcite 1.13 is released we should upgrade and get support for this
> again.
>
> Kevin Risden
>
> On Tue, May 16, 2017 at 7:23 PM, Kevin Risden <compuwizard...@gmail.com>
> wrote:
>
>> Yea this came up on the calcite mailing list. Not sure if aliases in the
>> having clause were going to be added. I'll have to see if I can find that
>> discussion or JIRA.
>>
>> Kevin Risden
>>
>> On May 16, 2017 18:54, "Joel Bernstein" <joels...@gmail.com> wrote:
>>
>>> Yeah, Calcite doesn't support field aliases in the HAVING clause.
>>> The query should work if you use count(*). We could consider this a
>>> regression, but I think this will be a won't fix.
>>>
>>> Joel Bernstein
>>> http://joelsolr.blogspot.com/
>>>
>>> On Tue, May 16, 2017 at 12:51 PM, Timothy Potter <thelabd...@gmail.com>
>>> wrote:
>>>
>>> > This SQL used to work pre-calcite:
>>> >
>>> > SELECT movie_id, COUNT(*) as num_ratings, avg(rating) as aggAvg FROM
>>> > ratings GROUP BY movie_id HAVING num_ratings > 100 ORDER BY aggAvg ASC
>>> > LIMIT 10
>>> >
>>> > Now I get:
>>> > Caused by: java.io.IOException: -->
>>> > http://192.168.1.4:8983/solr/ratings_shard2_replica1/:Failed to
>>> > execute sqlQuery 'SELECT movie_id, COUNT(*) as num_ratings,
>>> > avg(rating) as aggAvg FROM ratings GROUP BY movie_id HAVING
>>> > num_ratings > 100 ORDER BY aggAvg ASC LIMIT 10' against JDBC
>>> > connection 'jdbc:calcitesolr:'.
>>> > Error while executing SQL "SELECT movie_id, COUNT(*) as num_ratings,
>>> > avg(rating) as aggAvg FROM ratings GROUP BY movie_id HAVING
>>> > num_ratings > 100 ORDER BY aggAvg ASC LIMIT 10": From line 1, column
>>> > 103 to line 1, column 113: Column 'num_ratings' not found in any table
>>> >         at org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:235)
>>> >         at com.lucidworks.spark.query.TupleStreamIterator.fetchNextTuple(TupleStreamIterator.java:82)
>>> >         at com.lucidworks.spark.query.TupleStreamIterator.hasNext(TupleStreamIterator.java:47)
>>> >         ... 31 more
>>> >
>>>
>>
