
Re: [lmi] wx_test_validate_output.cpp


From: Vadim Zeitlin
Subject: Re: [lmi] wx_test_validate_output.cpp
Date: Thu, 22 Jan 2015 21:26:26 +0100

On Thu, 22 Jan 2015 18:22:07 +0000 "Murphy, Kimberly" <address@hidden> wrote:

MK> This test is giving me difficulty. When run as part of the entire
MK> automated GUI test suite, the 'validate_output_*' tests fail:
MK> 
MK> NOTE: starting the test suite
MK> [...]
MK> validate_output_illustration: started
MK> validate_output_illustration: ERROR (Assertion failure: Expected
MK> wxMessageDialog dialog was not shown. [file
MK> /opt/lmi/local/include/wx-3.1/wx/testing.h, line 315, in ReportFailure()].)
MK> validate_output_mec: started
MK> validate_output_mec: ERROR (Assertion failure: Expected wxDialog dialog was
MK> not shown. [file /opt/lmi/local/include/wx-3.1/wx/testing.h, line 315, in
MK> ReportFailure()].)
MK> FAILURE: 2 out of 23 tests failed.

 Do I understand correctly that this log is from a run using the secret
password option? Otherwise I'd expect to see the line

validate_output_mec: skipped (documents with extension "mec" not supported)

in it, and if the secret option was not given but this line is still
missing, then I must be even more lost than I realized...

MK> What I can't figure out is why. Thoughts? 

 Unfortunately I don't have any suggestions immediately. The entire test
suite does pass for me, whether I use the secret option or not (to be 100%
precise, if I don't, there is a message mismatch error in the
input_validation test due to the COI warning wording, but that is completely
unrelated).

 Can you think of any differences between your environment and mine? I'm
running the latest (r6089) lmi binary, compiled in the usual way (basically
just "make install"), under Windows 7 x64 from a MinTTY terminal, and AFAIK
you use a very similar system, don't you?

 Besides trying to reproduce the bug here (e.g. by tweaking my system to
resemble yours more closely), I'm also going to work on improving the error
messages by providing more details in them (something I really should have
done a long time ago anyhow), but this requires changes to wxWidgets itself
and so won't happen immediately, unfortunately.

 Until then, all I can do is explain how I would debug the problem myself:

1. I'd start by sprinkling the test code with wxLogMessage() calls to
   pinpoint exactly which assert fails: in the illustration output test, it
   could be either the first message box with the expected warning or the
   second one asking whether the file should be saved; in the MEC test it
   could be the MEC parameters dialog shown when creating a new MEC file or
   when opening an existing one. Concretely, I'd put wxLogMessage() between
   the two wxTEST_DIALOG() occurrences, as sketched below: if the message
   appears in the output, I'd know that it's the second expectation that
   fails, and if it doesn't, I'd know that the culprit is the first one.

   [This part really shouldn't be needed and should be done by the testing
   harness itself; if you can wait a bit, I'll update it to provide this
   information without any extra effort on your part.]
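
   Just to illustrate, here is a rough sketch of what I mean; the calls
   being exercised are invented placeholders and won't match the actual
   lmi test code, only its general shape:

        // First expectation: the warning message box.
        wxTEST_DIALOG(trigger_the_warning(),              // hypothetical call
                      wxExpectModal<wxMessageDialog>(wxID_OK));

        // If this line shows up in the test output, the first expectation
        // was satisfied and the failure comes from the second one.
        wxLogMessage("first dialog handled, waiting for the save question");

        // Second expectation: the "save the output file?" question.
        wxTEST_DIALOG(trigger_the_save_question(),        // hypothetical call
                      wxExpectModal<wxMessageDialog>(wxID_YES));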

2. Once I know exactly where the test fails, I'd insert a call to
   wxSafeShowMessage() just before that point. This pauses the test
   execution until the message box shown by this function is dismissed
   (unlike normal message boxes, such as those shown by wxMessageBox()
   itself, the "safe" message boxes shown by wxSafeShowMessage() are not
   intercepted even during testing), so I'd be able to examine the GUI
   state, e.g. to check whether any unexpected windows are shown or
   anything else is out of the ordinary.

3. If this still doesn't allow me to diagnose the problem, I'd replace
   wxSafeShowMessage() with

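        // wxEventLoop is declared in <wx/evtloop.h>; running this nested
        // loop blocks the test here while keeping the application
        // interactive.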
        wxEventLoop evtLoop;
        evtLoop.Run();

   This would stop the test, just as above, but without showing a modal
   dialog, so I'd be able to manually execute whichever action the test is
   simulating and interactively check its results -- and hopefully
   understand why they don't conform to the test's expectations.


 As I said, (1) can be improved/automated. Unfortunately I don't really
know what could be done about the rest -- debugging UI test failures is
tricky. The only advice I can give is to do it using two machines (possibly
a physical machine and a virtual machine running on it, of course), so that
debugging can be done without interfering with the GUI state.

 Sorry for the lack of more help,
VZ
