I'm putting the final touches on a paper I'm submitting for the annual JavaOne Call for Papers. In my preparation last night, I had some questions about the process that I sent to j1papers@sun.com. I thought the answers might be generally useful, so they're reproduced below:
Q: What is the attachment field for? Can I submit a PDF with my presentation outline and code samples?
A: Yes, this is exactly what this "field" is for. It gives the review team more insight into your proposal.
Q: How interactive is the review process? Does my submission need to be perfect, or will the committee be willing to accept the presentation but provide feedback for tailoring it? I'm willing to narrow the focus if they think it's too broad a topic or I'm covering too much.
A: We try to make the process interactive. The reviewers have access to your contact information and may engage in discussion with you regarding details that may help in their decision making.
I hope that helps other/future presenters. I'll try to post more information as I go through the submission process.
Thursday, November 15, 2007
Tuesday, February 21, 2006
I tracked down a very frustrating error today that turned out to be a non-issue. The problem was in Eclipse with a new project I was configuring for one of our developers. The project is supposed to produce 1.3 compatible classfiles and Eclipse was giving me the following error:
'Incompatible class files version in required binaries' followed by the path to the rt.jar file in my JDK directory.
I couldn't see what was different about this developer's system from all the others that were similarly configured without error. I finally figured out that this error is governed by a setting under Preferences -> Java -> Compiler -> Building -> "Incompatible required binaries". The developer having trouble had this set to "Error". On my system, it was set to "Ignore", which apparently is not an issue.
I suppose we might run into trouble in the future if we try to use the jars produced in a 1.3 JVM, but I kind of doubt it.
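For context on what the compiler is actually checking: every compiled .class file records a major version number in its header (47 means 1.3, 48 means 1.4, 49 means 5.0). Here's a small sketch, with names of my own invention, that reads the version out of its own bytecode:

```java
import java.io.DataInputStream;
import java.io.InputStream;

public class ClassVersion
{
    // Reads the major version from a classfile header:
    // 4-byte magic, 2-byte minor version, 2-byte major version.
    public static int majorVersion(InputStream classBytes) throws Exception
    {
        DataInputStream in = new DataInputStream(classBytes);
        if (in.readInt() != 0xCAFEBABE)
        {
            throw new IllegalArgumentException("not a classfile");
        }
        in.readUnsignedShort(); // minor version
        return in.readUnsignedShort(); // major: 47 = 1.3, 48 = 1.4, 49 = 5.0
    }

    public static void main(String[] args) throws Exception
    {
        InputStream is = ClassVersion.class.getResourceAsStream("ClassVersion.class");
        System.out.println("major version: " + majorVersion(is));
    }
}
```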
Friday, May 06, 2005
Java generics are cool.
This:
Collections.sort(administratableUsers,
new Comparator<User>()
{
public int compare(User u1, User u2)
{
return u1.getUsername().compareTo(u2.getUsername());
}
});
is much cleaner than:
Collections.sort(administratableUsers,
new Comparator()
{
public int compare(Object o1, Object o2)
{
User u1 = (User)o1;
User u2 = (User)o2;
return u1.getUsername().compareTo(u2.getUsername());
}
});
Monday, April 25, 2005
Someone on this Javalobby post asked me to elaborate on our use of Ant and copy filtering as a build configuration mechanism, so here it is.
The problem: projects tend to have lots of different configuration files that need to be changed based on the machine or circumstance in which they are installed. Local file paths, database passwords, and host names are just a few of the settings that typically need to be set. Frequently, the same value appears in multiple configuration files, inviting all the problems associated with dual maintenance. Also, it's often inconvenient to locate all the configuration files in the same place, so a user (a developer or configuration engineer) needs to know where they all live in order to configure them (web.xml lives in WEB-INF, log4j.xml lives in the classpath, etc.).
The solution: We use Ant to solve this problem. To address the dual-maintenance issue, we have a single file that a user needs to configure, called build.properties, which lives in the same directory as the build.xml file. It's typically copied from another file called build.properties.template that lives in CVS. Any values in build.properties override those in build.properties.template.
Example build.properties.template:
uk.co.ourCompany.mainApp.hostname=localhost
uk.co.ourCompany.mainApp.basedir=/home/builder/src/mainApp
Example build.properties:
uk.co.ourCompany.mainApp.hostname=bender.ourCompany.co.uk
uk.co.ourCompany.mainApp.basedir=c:/src/mainApp
In our Ant build process, we have tasks that handle configuring all of our various files. They look like this:
<target name="setupWebXml">
<copy overwrite="true"
file="${uk.co.ourCompany.mainApp.basedir}/web/WEB-INF/web.xml-template"
tofile="${uk.co.ourCompany.mainApp.basedir}/web/WEB-INF/web.xml">
<filterset refid="build.propertiesFilter" />
<filterset refid="build.properties.templateFilter" />
</copy>
</target>
The filtersets are defined like this:
<!-- The following two filter sets are referenced in each of the following config targets.
First the build.properties tokens are replaced, then the build.properties.template
tokens are replaced (only if they didn't already exist in build.properties) -->
<filterset begintoken="{" endtoken="}" id="build.propertiesFilter"
description="Used to parse tokens in config files into their associated values in build.properties.">
<filtersfile file="build.properties"/>
</filterset>
<filterset begintoken="{" endtoken="}" id="build.properties.templateFilter"
description="Used to parse tokens in config files into their associated values from build.properties.template.">
<filtersfile file="build.properties.template"/>
</filterset>
Here's a snippet of the web.xml-template file that becomes web.xml:
<context-param>
<param-name>Hostname</param-name>
<param-value>{uk.co.ourCompany.mainApp.hostname}</param-value>
</context-param>
That's an artificial example for the purposes of this article; there are better ways to get the hostname, if you should be getting it at all.
Tip: At the top of each template file, put in a note (we actually have a token that also gets replaced) that warns users that the file has been generated and they should not edit it. It's very frustrating to make changes to a file and find they don't take effect because they've been clobbered by the build process.
Thursday, October 07, 2004
I wanted to make a note of a solution I found today in the hopes that it will be helpful to others and easier to find.
Until today, hot-syncing my Sony Clie PEG-SJ22 with my Dell Inspiron laptop was so slow as to be useless (more than 20 minutes before I gave up and cancelled the operation). I finally was able to track down a fix via a lot of googling and came up with the following.
The problem is with the SMC IR driver used by default in WinXP. Apparently, it tries to communicate much faster than the Clie can handle. Throttling it back via the driver controls doesn't fix it. The trick is to replace the SMC driver with the generic Windows IR driver. That means the following:
- Go to Control Panel -> Wireless Link -> Hardware tab -> Properties for the IR device -> Driver tab
- Click Update Driver...
- Install from specific location. Next.
- Don't Search. I will choose the driver to install.
- Uncheck Show Compatible Hardware
- Choose Manufacturer (Standard Infrared Port)
- Device: Built-in Infrared Device
Wednesday, September 01, 2004
I wanted to post the following "instructions" in case they'd be of use to anyone else.
My new hard drive came today (80GB Hitachi Deskstar SATA) and I was able to image the old parallel IDE drive to the new one without a hitch. Just to be clear, what I wanted to end up with was the new drive exactly replacing the old one, but with more space. I may try to use the old one as a secondary drive later, but not right away.
Here's what I did (expanding on these Instructions)
1) Installed the SATA drivers for my Motherboard (K8T-NEO FSR) into Windows.
2) Burnt the latest Knoppix Linux-on-CD (3.4) to disc.
3) Rebooted with the CD in the drive. Gave it the "knoppix26 lang=us" start option. If you go with the default, you get the 2.4 Kernel and it won't support SATA.
4) Opened the root shell and unmounted the C: drive with 'umount /mnt/hda1'. Double-checked that this worked by running 'mount' and verifying neither hda nor hdg (the device assigned to my SATA drive) appeared. BTW, I didn't expect it to be hdg. I only noticed the name during the boot sequence.
5) From the root shell, I ran 'dd if=/dev/hda of=/dev/hdg'. That basically says, copy every byte from /dev/hda to /dev/hdg. This took about an hour to do 20GB. You can check on the process while it's running by doing a 'kill -SIGUSR1 <pid of the dd process>'. If you don't know Unix, don't worry about it. Just be patient. For reference, I was getting about a 9MB/s transfer rate.
6) At this point, the new drive is exactly like the old one, plus a bunch of unusable wasted space. To enlarge the 20GB partition to 80GB, I used QtParted (partition editor) which comes with Knoppix.
7) I disconnected the old IDE drive and then had to tell my BIOS to use the SATA drive at boot time.
That's it.
Friday, August 27, 2004
I want to mention a cool plugin I recently found for Eclipse. It's called implementors and it addresses a frustration I've had for a while. It has to do with interfaces and the implementors of those interfaces. We try to write all our code to interfaces and then write implementing classes to handle the actual processing. So if you need to interact with the database, you obtain an instance of IDataService. But what happens when you want to look at the code behind one of those methods? Pressing F3 (go to definition) takes you to the interface, which is not terribly helpful. What I really want is to go to the concrete class, DataServiceImpl, that is always behind that interface. The implementors plugin gives you Alt-F3, which takes you straight to the implementation. Beautiful.
Friday, August 13, 2004
I had a need today to dynamically add an onload handler to a web page, if a certain block of <script> was included in the page. Not wanting to overwrite any previously assigned onload handlers, this page had the solution.
Tuesday, August 03, 2004
Last month I built a JSP and Servlet-based list paging framework to take long lists of items and display them over multiple pages, much like Google. Before I did this, I of course looked around for free, public implementations of the same thing, as I see no reason to re-invent the wheel. However, I wasn't able to find one to fit my needs. JSP Tags has a pager library that seems to be very popular, but it requires roundtrips to the server for each page and wants to manipulate the URL for its refreshes. I wanted to avoid (obvious) roundtrips and have more control over the refresh behavior.
It should be noted that I only had IE 5.5+ (IE6, really) as a target platform, although I usually make reasonable attempts to keep the latest Mozilla happy as well. This is a nice luxury I know and it likely made my solution more workable.
There are essentially two parts to my framework. On the server side, there's the ListPager object that handles breaking a list into "pages" as well as filtering items out of the list that don't match a given Predicate. This object is built by a ListPagerFactory and then stuffed into the request. The request itself is initiated in a hidden IFRAME that is created by the other half of the framework, the listPager.js javascript file. This script is responsible for initiating requests to the server for new pages and then handling a "callback" indicating the latest page has loaded and should be integrated into the visible page.
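For a flavor of the server side, the core page-slicing arithmetic is roughly the following (a simplified sketch with made-up names, not the actual ListPager API):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PagerSketch
{
    // Returns one "page" of the list; pageIndex is zero-based.
    public static <T> List<T> page(List<T> items, int pageIndex, int pageSize)
    {
        int from = pageIndex * pageSize;
        if (from >= items.size())
        {
            return Collections.emptyList();
        }
        int to = Math.min(from + pageSize, items.size());
        return items.subList(from, to);
    }

    public static void main(String[] args)
    {
        List<Integer> items = Arrays.asList(1, 2, 3, 4, 5, 6, 7);
        System.out.println(page(items, 0, 3)); // prints [1, 2, 3]
        System.out.println(page(items, 2, 3)); // prints [7]
    }
}
```

The real object also runs the Predicate filter over the list before slicing.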
[...blah blah, more explanation how it works...]
The reason I felt I should blog this is that I'm debating internally if I should release my framework to the public, probably via SourceForge. I feel like I need to justify why the world needs another paging framework. I guess I'm unsure if the answer "well, I needed one" is good enough.
I'll keep thinking about it...
Tuesday, July 27, 2004
Something recently pointed me to the following article about the Spring Framework. Although it's a long read that I'm still digesting, it sounds very intriguing. Definitely something I'm going to consider the next time I start a new project.
Friday, July 23, 2004
I just want to write what a good experience I've had recently with HTMLArea, the WYSIWYG HTML editor that replaces textarea elements in your web page. This thing is so easy to use and configure. I was able to drop it in in less than 15 minutes, including unpacking the .zip and choosing which buttons I wanted to appear. I also tried FCKEditor, but it doesn't allow the setting of text colors, which I felt was a requirement. I understand that <font> is deprecated and I should use styles, but these WYSIWYG controls just don't work well for that paradigm, IMHO. The users they are targeted at understand Word, not HTML/CSS.
Friday, July 02, 2004
Friday, June 18, 2004
Just started setting up Tomcat 5 today and wanted to leave some notes on a few things I had to do to get it to work.
I had to change the doctype of my web.xml. This page had an excellent example of what to do.
I added this to enable EL in JSPs:
<jsp-config>
<jsp-property-group>
<url-pattern>*.jsp</url-pattern>
<el-enabled>true</el-enabled>
<scripting-enabled>true</scripting-enabled>
</jsp-property-group>
</jsp-config>
I had to replace my Jakarta Taglibs Standard 1.0 library with the 1.1 version. 1.0 is for Servlet 2.3, 1.1 is for 2.4. I discovered this when the JSP compiler complained about an EL expression in the value attribute of the <c:param> tag.
Tuesday, June 15, 2004
Just a quickie.
We ran into an issue with our app today whose root cause was a problem with the default value for the "session" attribute of JSP pages. Lesson learned: session is set to true by default. In other words, unless you specify the following:
<%@ page session="false" %>
your JSPs will automatically request a session object.
Monday, June 07, 2004
This week I've been learning a great deal about the image handling that is built in the JDK 1.4. Specifically, the javax.imageio.* packages. I've been mainly interested in two things: validating that a file is an image and gathering its meta data, and resizing the image. If users upload an image that is larger than we want to allow, I want to be able to reduce the image to an allowed size so they have the choice of using the smaller image. That way the upload wasn't a complete waste of time.
I accomplished the first step with code like this:
ImageInfo info = new ImageInfo();
try
{
    Iterator imageReaders = ImageIO.getImageReaders(
            ImageIO.createImageInputStream(formFile.getInputStream()));
    while (imageReaders.hasNext())
    {
        ImageReader reader = (ImageReader) imageReaders.next();
        info.type = ImageType.getImageType(reader.getFormatName());
        if (info.type != null)
        {
            break;
        }
    }
    if (info.type == null)
    {
        return null;
    }
    // Read the image a second time to pick up its dimensions.
    BufferedImage theImage = ImageIO.read(formFile.getInputStream());
    info.height = theImage.getHeight();
    info.width = theImage.getWidth();
    info.fileSize = formFile.getFileSize();
    theImage = null;
}
catch (Exception e)
{
    log.debug("Exception caught trying to read image", e);
    return null;
}
log.debug("Parsed image info: " + info);
I could probably have been more efficient and used the ImageReader I found in the first part of the code to actually read in the image instead of relying on ImageIO.read(); I only figured out today that that's an option. The ImageIO guide claims the API was developed with application-developer ease-of-use as a top priority. I'm sure it was, but I still think people like myself would benefit from some clear recipes for common operations. Almost everyone posting forum questions about image resizing is doing so for the same reasons as me, so it's clearly a confusing area. It doesn't help that there's a second API, the Java Advanced Imaging API, and multiple ways to accomplish a resize, all of which muddies the water further. </rant>
Resizing the image was a bit more difficult with the issue complicated by a bug in the 1.4.1 JDK. Upgrading to 1.4.2_4 fixes the issue (handling of certain JPEGs), but it still took me a while to figure out. Here's how I resize:
File inFile = new File(...);
InputStream is = new FileInputStream( inFile );
BufferedImage bufIn = ImageIO.read(is);
is.close();
// scale is a float with 1 being no change in size, 2 doubling, 0.5 halving...
AffineTransformOp op = new AffineTransformOp(AffineTransform.getScaleInstance(scale, scale), null);
bufIn = op.filter( bufIn, null );
File outFile = new File(...);
ImageIO.write(bufIn, imageInfo.type.getFormatName(), outFile);
Unfortunately, I'm still not accomplishing my goal with 100% success. I can reduce the file size of an image by scaling it by 50%, but on my test file, 90% scaling actually makes the file considerably larger. And there's really no way to specify a scaling based on file size that I've found. Perhaps I'll have to break down and post questions to the java.sun.com forums...
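One lever worth noting for the file-size problem is the JPEG compression quality, which ImageIO does expose via ImageWriteParam. The helper below is a sketch (the class and method names are mine); it writes a BufferedImage as a JPEG at an explicit quality between 0.0 and 1.0. It doesn't let you target a file size directly, but lower settings generally produce smaller files:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Iterator;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class JpegQuality
{
    // Writes img to out as a JPEG with an explicit compression quality (0.0-1.0).
    public static void writeJpeg(BufferedImage img, File out, float quality) throws Exception
    {
        Iterator writers = ImageIO.getImageWritersByFormatName("jpeg");
        ImageWriter writer = (ImageWriter) writers.next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality);
        ImageOutputStream ios = ImageIO.createImageOutputStream(out);
        writer.setOutput(ios);
        writer.write(null, new IIOImage(img, null, null), param);
        writer.dispose();
        ios.close();
    }

    public static void main(String[] args) throws Exception
    {
        // A synthetic image with enough detail that the quality setting matters
        BufferedImage img = new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 200; y++)
        {
            for (int x = 0; x < 200; x++)
            {
                img.setRGB(x, y, (x * y) & 0xFFFFFF);
            }
        }
        File high = File.createTempFile("high", ".jpg");
        File low = File.createTempFile("low", ".jpg");
        writeJpeg(img, high, 0.9f);
        writeJpeg(img, low, 0.3f);
        System.out.println(high.length() + " bytes vs " + low.length() + " bytes");
    }
}
```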
Thursday, June 03, 2004
Since we're going to allow our users to upload/store images, we need to validate the files they send us. Doing some research on the web, it appears we need to check the following:
1) The actual image binary data. Is the file a valid image? Is it a web image type (E.g. jpeg, gif, png)
2) The image does not have extreme dimensions. No 1x20000 images should be allowed
3) The filename extensions should match the image data. We can correct this automatically if we can identify the image type.
4) The image size should also be capped. The Struts Upload is already capping file upload sizes for us. However, we might want to cap images even lower. Have to talk to marketing about that...
The javax.imageio package seems to have all the tools I need for this.
Tuesday, June 01, 2004
We've decided that our site needs to use URL-rewriting instead of cookies as a means of tracking sessions. (We're hoping it will defeat the nasty habit one of our customer's proxy caches has of showing some users the data for other users. The jsessionid in the URL will hopefully make all URLs unique to each user).
I've learned that to turn on URL-rewriting you have to turn off cookies for the <context> in the Tomcat server.xml file and restart Tomcat. It also appears you have to restart your browser (IE in my case). I tried setting cookies="false" in META-INF/context.xml, but it doesn't appear to have any effect in Tomcat 4. I've heard Tomcat 5 is more respectful of settings in context.xml.
Of course, to make URL rewriting work, you have to pass all your URLs through HttpServletResponse.encodeURL() or use the appropriate taglib tag (<html:rewrite>, <c:url>, etc...)
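To illustrate what the container actually does to your URLs, here's a rough simplification of encodeURL's rewriting behavior (hypothetical code, not the real implementation, which also decides whether rewriting is needed at all):

```java
public class UrlRewriteSketch
{
    // Appends the session id as a path parameter, ahead of any query string,
    // roughly the way a container rewrites URLs when cookies are unavailable.
    public static String rewrite(String url, String sessionId)
    {
        int q = url.indexOf('?');
        if (q < 0)
        {
            return url + ";jsessionid=" + sessionId;
        }
        return url.substring(0, q) + ";jsessionid=" + sessionId + url.substring(q);
    }

    public static void main(String[] args)
    {
        System.out.println(rewrite("/app/list.do?page=2", "ABC123"));
        // prints /app/list.do;jsessionid=ABC123?page=2
    }
}
```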
Monday, May 17, 2004
Ran into a small problem when I tried to run our webapp in Tomcat 5. I got this error, "can't declare any more prefixes in this context" along with a stack trace. As far as I can tell, this is caused by conflicting XML parser versions. In our case, we had a ReportMill6.jar in WEB-INF/lib that contained a copy of the Crimson parser. That was enough to cause this bizarre error to appear when I tried to view a JSP. Fixing it meant deleting the jar.
Tuesday, May 11, 2004
I just ran into a small problem with our site layout where one of the boxes would mess up the whole layout if its content was too long (it's a tree control with the text set to not wrap). I figured the solution would be the CSS property, overflow. In compliant browsers, when this is applied to a DIV and set to auto, scrollbars will appear if the contents of the div would overflow the box. The problem is, IE doesn't properly handle the overflow property for divs. However, IE does have a non-standard extension, overflow-x and overflow-y. So the solution to my problem was to give the div the following style:
<div style="width: 100%; overflow: auto; overflow-x: auto; overflow-y: hidden;" >
The first overflow style is for compliant browsers. The overflow-x causes the scrollbar to appear if necessary. The overflow-y disallows vertical scrollbars.
Monday, May 10, 2004
Today I used a new tool for the first time. It's a DOM viewer for IE (it may work in other browsers as well, but Moz has one built in, so who cares). It's from Brain Jar. You add a link to their domviewer.html page from the page you want to inspect and it brings up all the properties for the top-level object. Clicking on an object with children expands to show the children. It doesn't work perfectly (e.g. I wasn't able to drill down to a specific IFRAME), but it was able to help me find the property I was looking for.